[Q] Difference between armv6-800MHz and armv7-1024MHz - XPERIA X8 Q&A, Help & Troubleshooting

I am a bit confused.
There is only a 200 MHz difference, yet an ARMv7 processor can handle much more than an ARMv6 processor.
Many games today are released only for ARMv7, or for phones with 1024 MHz processors.
My X8 tops out at 806 MHz.
As an example, I played GTA 3 without sound, yet with 200 MHz more, others can play even GTA Vice City in HD.
It's driving me mad. What's wrong here?
Please, someone elaborate!!!

The difference between these two types of processors is not just the CPU clock. ARMv7 processors also have a much bigger hidden memory called "cache". This memory is expensive and absolutely super-fast, so the CPU doesn't have to wait for data to arrive from RAM (which is slow compared to cache memory).
This cache memory in ARMv7 devices is around 1 megabyte, but in ARMv6 it's just a few kilobytes.
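You can actually see the cache-vs-RAM gap from code. Below is a crude demo (my sketch, not a rigorous benchmark; buffer sizes are illustrative guesses, not the X8's real cache layout) that chases dependent pointers through a small buffer that fits in cache and a big one that doesn't; the big one shows a much higher time per load:

```c
/* Crude cache-vs-RAM latency demo (a sketch, not a rigorous benchmark).
 * Each load depends on the previous one, so the CPU stalls for the full
 * memory latency on every miss. Buffer sizes below are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns(size_t nodes, long hops) {
    size_t *next = malloc(nodes * sizeof *next);
    for (size_t i = 0; i < nodes; i++) next[i] = i;
    /* Sattolo's shuffle: one big random cycle, so prefetching can't help. */
    for (size_t i = nodes - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long h = 0; h < hops; h++) p = next[p];   /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);
    fprintf(stderr, "(sink %zu)\n", p);  /* keeps the loop from being optimised away */
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / hops;
}

int main(void) {
    printf("16 KB buffer: %.1f ns/load (cache)\n",
           chase_ns(16 * 1024 / sizeof(size_t), 10000000));
    printf("16 MB buffer: %.1f ns/load (RAM)\n",
           chase_ns(16UL * 1024 * 1024 / sizeof(size_t), 10000000));
    return 0;
}
```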
And it's not just cache: the CPU design in ARMv7 is different, and some games use special libraries that are only included in ARMv7 ROMs, etc., etc.
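About those "special libs": a native (NDK) game can check the CPU at runtime and only load its ARMv7/NEON code paths where they exist. A minimal sketch using the NDK's cpufeatures helper (the cpufeatures API is real; the two render functions are hypothetical game code):

```c
/* Sketch: runtime ARMv7/NEON detection via the Android NDK's cpufeatures
 * module (include <cpu-features.h>, link the cpufeatures static library).
 * render_neon() and render_generic() are hypothetical game functions. */
#include <cpu-features.h>

extern void render_neon(void);     /* hypothetical ARMv7+NEON fast path */
extern void render_generic(void);  /* hypothetical ARMv6-compatible path */

void pick_render_path(void) {
    uint64_t feat = android_getCpuFeatures();
    if (android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM &&
        (feat & ANDROID_CPU_ARM_FEATURE_ARMv7) &&
        (feat & ANDROID_CPU_ARM_FEATURE_NEON)) {
        render_neon();     /* the "special libs" an ARMv7 ROM can use */
    } else {
        render_generic();
    }
}
```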
More info:
http://android.stackexchange.com/questions/21137/difference-between-armv6-armv7-processors
&
http://webcache.googleusercontent.c...ifference+between+armv6+&+armv7&hl=en&ct=clnk
Sent From My "ULTIMATE ROM- gb - WP8 edition" via Tapatalk

The difference between armv6-800 and armv7-1000 is NOT only the 200 MHz.
The real difference is the processor design. ARMv6 is the older design (on the X8 this is an ARM1136 core), while ARMv7 is newer with more "knowledge": a more complex instruction set (ARMv7 has special instructions that we don't have), higher IPC (instructions per clock), etc. Example: our X8 has an ARM1136 core (possible to overclock to 800 MHz), while the Xperia mini has a Scorpion core (similar to the popular Cortex-A9 design, but a little bit faster) running at 1 GHz by default. The Scorpion core also has a newer floating-point unit (VFPv3 against our VFP), the ARMv7-A additions to the instruction set, the NEON extension (a SIMD unit that accelerates audio/video processing), the Thumb-2 extension (roughly double the speed of the old Thumb-1), and finally a newer GPU (Adreno 205 against our Adreno 200, with theoretically double the computing speed).
So if you compare the two phones, on paper the differences look small (400 MHz (only 200 with OC) in CPU speed, a near-identical GPU, the same resolution), but with heavyweight apps (usually the HD games) the mini can be 2-3 times faster than our phone.
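To make the NEON point concrete, here's a toy illustration (mine, not from any actual game) of mixing two audio buffers with NEON intrinsics, four samples per instruction, next to the plain loop an ARMv6 chip is stuck with. It needs an ARMv7 toolchain with -mfpu=neon, and assumes n is a multiple of 4 for brevity:

```c
/* Toy NEON example: mix two float audio buffers, 4 samples at a time.
 * ARMv7+NEON only; an ARMv6 CPU must use the scalar loop below. */
#include <arm_neon.h>

void mix_neon(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);       /* load 4 samples */
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));   /* add and store 4 at once */
    }
}

void mix_scalar(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];                    /* one sample per iteration */
}
```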

Related

Droid/Milestones' weak graphic performance. OC possible?

The Droid 2/X use the same graphics processor as the Droid 1, the PowerVR SGX 530. According to the datasheet, this core is designed to run at 200 MHz with the power to render 14M triangles/sec. But our Droid/Milestone runs it underclocked at 110 MHz (7M tri/s) while the D2/X run at 200 MHz. That leads to a major UI responsiveness and gaming difference between the D2 and D1.
I wonder if there's any possibility to overclock the GPU as well?
Thanks in advance.
Sent from my Milestone using XDA App
TeroZ said:
I wonder if there's any possibility to overclock the GPU as well?
As far as I know this has been tried (overclocking), but with no results (constant reboots).
Imagination Technologies (PowerVR) designs the GPU internals and sells the "plans" for the part, to be included in SoCs like TI's OMAP.
PowerVR does not, however, define the exact clocks at which the parts should run, nor other things like the number of memory channels, memory speed, etc.
Texas Instruments are the ones who defined the GPU clocks. The OMAP 34xx chips (Droid 1, Milestone, XT720, Flipout, etc.) are made on a 65nm process, which dictates a certain power consumption at given clocks; that's why they defined a ~100 MHz clock for the GPU and ~600-800 MHz for the CPU.
The OMAP 36xx (Droid X, Droid 2, Defy, etc.) are made on a newer, smaller 45nm process, which allows them to run at higher speeds while consuming approximately the same power, which is why Texas Instruments decided to clock the GPU at ~200 MHz and the CPU at ~1-1.2 GHz.
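The process-node point follows from the usual dynamic-power relation P ≈ C·V²·f: a smaller process lowers switching capacitance and voltage, so the clock can rise within the same power budget. A toy calculation (every number below is an illustrative assumption, not TI's real figures):

```c
/* Illustrative only: why a 45nm part can clock higher at the same power.
 * Dynamic power scales roughly as P = C * V^2 * f. Capacitance and
 * voltage values are invented for the example. */
#include <stdio.h>

int main(void) {
    double C65 = 1.0, V65 = 1.35, f65 = 110e6;  /* assumed 65nm GPU figures */
    double C45 = 0.7, V45 = 1.20, f45 = 200e6;  /* assumed 45nm GPU figures */
    double P65 = C65 * V65 * V65 * f65;
    double P45 = C45 * V45 * V45 * f45;
    /* prints ~1.0: nearly double the clock for the same power budget */
    printf("relative power 45nm/65nm: %.2f\n", P45 / P65);
    return 0;
}
```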
So it's not like the Milestones and Droids have their GPUs underclocked, those are just their factory clocks.
Of course, overclocking the GPU would be nice and it could be possible. If someone found out how to change the GPU's voltage and clocks, I'm sure it could come in handy in future games.
However, right now, the 1st-gen Milestones/Droids are running every high-end HD game from Gameloft at full speed, and I bet they'll even do Infinity Blade and other UE3 games when they're out for Android.
Every "HD" Android game has to be compatible with the 1st-gen Snapdragon's GPU, the Adreno 200, which is a lot slower than the SGX530 @ 100 MHz, so we're sitting comfortably above the base spec for now. And with all the Windows Phone 7 phones coming with a 1st-gen Snapdragon (a mandatory requirement), it'll stay like this for a while.
So there's really not a big need for overclocking the GPU right now, except for getting higher scores in mobile benchmarks (some of them terribly unoptimized, like GLBenchmark 1.1 and 2.0).
Furthermore, it seems the first factor to limit the 1st-gen Droids in games will be the amount of RAM.
The first UE3-based game for Android is already out, and it requires 512MB of RAM.
So the game runs on Nexus One and not on a Droid/Milestone, which has far superior graphics performance.
(I'm pretty sure this has something to do with the fact that Android doesn't allow graphics buffering in the main memory, though, which could be resolved in future firmware revisions).
Then again, overclocking the GPU would be cool, and I'm pretty sure getting our SGX530 to work @ ~200MHz would significantly increase the gaming longevity of our phones for quite a while.
Thanks for your useful and informative reply.
"The Manhattan Project" on the Galaxy S series just made me curious about the Droid's GPU OC, because the SGS also uses a PowerVR GPU. But things aren't easy, due to the fact that one SoC is made by TI while the other is made by Samsung; the structure inside the two SoCs may be completely different.
But I still hope someone capable will try something on this.
That would be really cool and would significantly lengthen the lifetime of our Droid and Milestone.
Thanks again for your reply!
PS: I also find it strange that the UI (not games) on the N1 is faster than on an OCed Droid. Could it be an optimization problem?
Sent from my Milestone using XDA App
TeroZ said:
PS: I also find it strange that the UI (not games) on the N1 is faster than on an OCed Droid. Could it be an optimization problem?
Definitely partly optimization: a fast ROM with a good theme, like the Droid X theme on the GOT 2.2.1 ROM, has as fast a GUI as I've encountered on Android, even without overclocking.
Also take into consideration that all the current 2.1 and 2.2 ROMs have a 30 fps cap in 2D; perhaps when the final 2.2 update arrives there will be some performance gain.
Sent from my Milestone using Tapatalk

[Q] Tegra Games & Overclocking !?

Is it just me, or do Tegra-optimized games not benefit much from overclocking?
I have Rogue-Kernel 1.3.1 (with GPU OC) running and can definitely say it's working nicely (AnTuTu bench @ 1.6 GHz: 7200+ points; @ 1.0 GHz: 5000+).
Playing Riptide or Shadowgun, I don't really see much of a difference. Only underclocking to <= 400 MHz produces lower framerates. Obviously these games are heavily GPU-dependent and don't really care about the CPU.
The Rogue-Kernel overclocks the GPU as well (provided you don't install the "N" version). So is there a way to change settings for the GPU alone? Or is it perhaps linked to the CPU in a shared clock network like on some other chipsets, or overclocked automatically depending on the CPU speed?
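For what it's worth, a crude way to check which side is the bottleneck is to time frames at different CPU clocks: if the average frame time barely moves when the CPU clock changes, the game is GPU-bound (or vsync-limited). A rough sketch of such a probe, where render_frame() is a hypothetical stand-in for the game's draw-and-swap call:

```c
/* Rough CPU-vs-GPU-bound probe: average frame time over N frames.
 * Run once at stock clock and once over-/underclocked; a flat result
 * suggests a GPU or vsync limit. render_frame() is hypothetical. */
#include <stdio.h>
#include <time.h>

extern void render_frame(void);   /* assumed: the game's draw + swap */

double avg_frame_ms(int frames) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < frames; i++)
        render_frame();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    return ms / frames;
}

int main(void) {
    printf("avg frame time: %.2f ms\n", avg_frame_ms(600));
    return 0;
}
```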
Thanks in advance...
Hm, so all you overclockers are OCing for benchmark numbers only.

[Q] Galaxy S vs Neo V for gaming? Help !!!

I am going to buy a new smartphone in a couple of days.
At first I thought of buying the Neo V because of its design and Bravia-engine display, with a large screen for gaming (the Ray would be a little too small to hold in both hands for gaming, so I dropped it).
But now I am confused about which phone to buy after seeing the Galaxy S: even though it's an old model, it still has the same standing in the market.
I actually want to buy a smartphone for HD gaming, from Gameloft and others.
Which phone supports all the HD games?
So please tell me, which one should I buy?
I have the SGS (Samsung Galaxy S) and it's a very good phone!
Anyway, for HD gaming it's not that great... I don't have the Neo V, but checking the specs I can tell that it isn't very good either.
I would recommend you get an Xperia Play (if you want your smartphone for gaming); it's cheaper than the SGS and has a better GPU.
The good thing about the SGS is that it gets a lot of attention from the community (which means a lot of custom ROMs).
If you can get the money, get an SGS2 or a Galaxy Nexus.
Cheers
For gaming, the Galaxy S with its PowerVR SGX 540 graphics is probably one of the best low-priced Android phones, since the iPhone uses PowerVR SGX 535 graphics. Because many games are natively coded and optimized for PowerVR GPUs, none of that "porting" business is needed.
That means the games are stable, fast and efficient when running on PowerVR graphics. There isn't as much quality loss as with regular ports, unless you're playing on an HD TV via TV-out; otherwise, WVGA is the limit there.
The Neo V has an Adreno 205, a non-PowerVR chip that is comparatively slower than the PowerVR SGX 540, so I wouldn't recommend it for gaming.
What about the Galaxy R i9103?
Nvidia Tegra 2 AP20H
Dual-core 1 GHz Cortex-A9
ULP GeForce
1 GB RAM
I don't know, but if you play that much, your battery will be down in 2-3 hours.
The Xperia Play would probably be good, but its custom ROM scene is a bit of a barren wasteland compared to over here on the SGS forum. There are only 1-2 custom ROMs that are any good.
I've had very mixed results converting my own PSX games with PSXperia, but the dedicated hardware controller is soooo much better than an on-screen one.
vamsikrishnach said:
What about the Galaxy R i9103?
Nvidia Tegra 2 AP20H
Dual-core 1 GHz Cortex-A9
ULP GeForce
1 GB RAM
As far as I know, Tegra 2 shares many similarities with the PowerVR architecture. Some graphically intensive games are also optimized for this particular chip and are labelled THD.
Let's sum up the pluses:
1) Similar architecture
2) THD-optimized games enabling extra detail on the ULP GeForce GPU
So why not?
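Incidentally, this per-chip optimization usually works by sniffing the GPU at runtime: a game queries the GL renderer string and enables its Tegra (THD) or PowerVR asset packs accordingly. A hedged sketch (it needs an active OpenGL ES context, and the renderer strings compared against are typical examples, not a guaranteed list):

```c
/* Sketch: pick a per-GPU asset/code path from the GL renderer string.
 * Requires a current OpenGL ES context. Renderer strings vary by driver,
 * so treat the comparisons below as examples only. */
#include <GLES/gl.h>
#include <string.h>

const char *pick_asset_pack(void) {
    const char *r = (const char *)glGetString(GL_RENDERER);
    if (!r) return "generic";
    if (strstr(r, "Tegra"))   return "THD";      /* Tegra-enhanced assets */
    if (strstr(r, "PowerVR")) return "powervr";  /* SGX-optimised textures */
    if (strstr(r, "Adreno"))  return "adreno";
    if (strstr(r, "Mali"))    return "mali";
    return "generic";
}
```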

Intel Chip

Hello everyone.
I've been reading this forum for a few days about the Motorola Razr i.
I certainly found interesting articles, but strangely very few of them discuss the Intel chip.
While reading about the chip in the Motorola Razr i, I found a curious comment:
"Medfield z2460 was meant to test the waters and is the reason why it was launched in india and not the US. Just a precursor to the medfield z2580. The z2580 and Clovertrail will be offered in dual core variants (not to mention quad core for clover trail) and will ditch the imagination technologies sgx 540 for an sgx 544 mp2 which runs 34 [email protected] mhz.
The sgx 540 gets 6.4 gfllops @400mhz . The adreno 225 runs at 24.5 [email protected] 400mhz and the tegra 3 (t30l or t33) gpu runs 13 [email protected] mhz. So being that 6.4gflops vs 24.5gflops is relative to 202% what do you think happens with 34 gflops vs 24.5 gflops? Plus the s4 is 103 mflops single threaded while medfield z2460 is 90 mflops single threaded on the cpu side. That's pretty close. Dual core comparison with sgx544 might actually be superior and at a higher process node (32nm vs 28nm), and that's with an in order instruction set vs ARM's out of order. I don't see how you get "x86 Atom has very slim chances when taking on Qualcomm’s ARM processors or any other new generation ARM mobile CPU from Samsung or Nvidia" with that info. Your talking a gpu and a core.
Come spring they go out of order, not to mention ditching 5 year old architecture for silvermont, 22nm process and inclusion of intel hd gpu with 40 to 80 gflops (depending on eu count) and you think there will be no competition? Even the apq8064 adreno 320 only has approx 40-45 gflops but that doesn't include the modem so higher tdp .
Maybe the exynos 5250 with mali [email protected] 68 gflops will be a threat given release schedule but still, nearly matching single threaded performance with the best chip on the market (and with 5 year old architecture), and beating every ARM chip to date in java script for a first try/test the waters offering? Swap a gpu and add a core and its game on. And adding new architecture, 22nm, out of order instruction and hd graphics and ARM might have a problem until 64 bit ARM v8."
My question is: how true is this?
I'm not publishing the link to the source because I don't know if that's allowed in this forum.
I apologize for possible flaws in my English.
I'm having a super-smooth experience, so yes, the hyper-threaded single-core chip is doing a very fine job compared to the ARM competitors.
But is it true that Intel will go out-of-order with their next architecture? The whole point behind the Atom processors was to take advantage of Intel's advanced lithography and well-thought-out architecture, then simplify it to make it consume much less energy (and going from out-of-order to in-order was one of those simplifications).
Well, I always thought Intel smartphone chips were more powerful CPU-wise but behind GPU-wise.
And, Christ, you can quote all the figures you like, but it doesn't mean the chip will actually reach them. That's just what the individual parts can achieve in isolation.
Put them into a phone and reduce the power consumption to an acceptable level, and you get a lot less than the quoted figures.
Sent from my HTC Desire using xda app-developers app
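As a sanity check on figures like those above: peak GFLOPS is just ALUs × FLOPs/cycle × clock, and the sustained rate in a thermally constrained phone is lower. A toy calculation (the unit counts, FLOPs/cycle and the 60% sustain factor are assumptions for illustration, not vendor data):

```c
/* Toy peak-vs-sustained GFLOPS estimate. All inputs are illustrative
 * assumptions. Peak = units * flops_per_unit_per_cycle * clock (GHz). */
#include <stdio.h>

int main(void) {
    double units = 8, flops_per_cycle = 8, clock_ghz = 0.4;  /* assumed */
    double peak = units * flops_per_cycle * clock_ghz;       /* GFLOPS */
    double sustained = peak * 0.6;  /* throttling, memory stalls (assumed) */
    printf("peak: %.1f GFLOPS, realistic sustained: ~%.1f GFLOPS\n",
           peak, sustained);
    return 0;
}
```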

A little noob question about GPU S advance

Sorry for this question, but I'm very confused about the GPU of the I9070. Some people say "the S Advance has a single GPU", but the official NovaThor U8500 website says "multi-core GPU processes 2D and 3D graphics". So does the S Advance have a single- or dual-core GPU? Thanks. http://developer.sonymobile.com/knowledge-base/technologies/novethor-u8500/
The S Advance has a single-core Mali-400 MP GPU. And as far as I know, the Galaxy S2 also has a Mali-400 MP GPU, but a multi-core one (instead of just one core). If you read about the Mali-400 MP on ARM's website (link), this is what you'll see:
"Scalable from 1 to 4 cores, the Mali-400 MP enables a wide range of different use cases, from mobile user interfaces up to smartphones, tablets and DTVs, to be addressed with a single IP. One single driver stack for all multi-core configurations simplifies application porting, system integration and maintenance. Multicore scheduling and performance scaling is fully handled within the graphics system, with no special considerations required from the application developer."
So this shows that different phones can use the same GPU but with a different number of cores.
PS: Anyone is free to correct me if I'm wrong.
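Note that an app can't easily see the core count: the driver reports the same GPU model name on a single-core and a multi-core Mali-400 MP, and the work splitting happens inside the driver, exactly as the ARM blurb above says. A small sketch (it requires an active GLES context, and the reported strings are typical examples, not guaranteed):

```c
/* Sketch: the GL renderer string identifies the GPU model, not the number
 * of cores. A single-core Mali-400 MP (S Advance) and a multi-core one
 * (Galaxy S2) typically both report "Mali-400 MP"; the driver handles
 * multi-core scheduling invisibly. Requires an active GLES context. */
#include <GLES2/gl2.h>
#include <stdio.h>

void print_gpu_info(void) {
    printf("vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    /* No standard GLES query exposes the Mali core count to applications. */
}
```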
Sami Kabir said:
So this shows that different phones can use the same GPU but with a different number of cores.
Thanks for the answer. It's very strange, because my Galaxy S Advance runs N.O.V.A. 3 smoothly and fast (Mali-400), while my tablet based on the Allwinner A13 (also Mali-400) sometimes lags hard (and yes, I always keep my RAM optimized).
