Complete Guide to NVIDIA: Everything You Need To Know

NVIDIA continues its dominance in the graphics card industry. The company’s primary GPU lineup, sold under the GeForce brand, has been around for over two decades and spans close to twenty generations of discrete graphics processors for desktops and laptops. Fun fact: the name GeForce originally stood for "Geometry Force", since the GeForce 256 was the first consumer GPU to perform transform-and-lighting geometry calculations in hardware.
NVIDIA GeForce generations
Here’s a look at NVIDIA’s GeForce lineup:
GeForce 256
The first GeForce GPU in the lineup, the GeForce 256 (NV10) was launched in September 1999 and was the first consumer-level PC graphics chip that shipped with hardware transform, lighting, and shading.
GeForce 2 series
The following year NVIDIA launched the GeForce2 (NV15) that introduced a twin texture processor per pipeline (4x2) design, doubling texture fillrate per clock compared to GeForce 256. This was followed by the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a reduced cost.
GeForce 3 series
In 2001 NVIDIA launched the GeForce3 (NV20) which introduced programmable vertex and pixel shaders. A version of GeForce 3 codenamed NV2A was developed for the Microsoft Xbox game console.
GeForce 4 series
In February 2002 the GeForce4 Ti (NV25) was launched as a refinement to the GeForce3. It included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds. The GeForce4 MX was also introduced as a budget option based on the GeForce2, with the addition of some features from the GeForce4 Ti.
GeForce FX series
The GeForce FX (NV30) introduced a big change in architecture. It brought support for the new Shader Model 2 specification and carried the 5000 model number, as it was the fifth generation of the GeForce family. The series was also infamous for running hot and for its noisy cooling fans.
GeForce 6 series
Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 and fixed the weak floating point shader performance of its predecessor. It additionally implemented high-dynamic-range imaging, SLI (Scalable Link Interface), and PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing).
GeForce 7 series
The GeForce 7 series (G70/NV47) was introduced in June 2005 and was the last NVIDIA GPU series to support the AGP bus. It offered a wider pipeline and an increase in clock speed along with new transparency supersampling and transparency multisampling anti-aliasing modes (TSAA and TMAA). A version of the 7950 GT, called the RSX 'Reality Synthesizer', was used as the primary GPU on the Sony PlayStation 3.
GeForce 8 series
The eighth-generation GeForce series launched in 2006 and, with the G80 chip, was the first to fully support Direct3D 10. The G80 was made on a 90nm process and built around the new Tesla microarchitecture. The design was later refined and shrunk to 65nm; the revised chip, codenamed G92, arrived in 2007 in the 8800 GS, 8800 GT, and 8800 GTS 512.
GeForce 9 series
The GeForce 9 series, introduced a short while later in 2008, was largely a revision of the GeForce 8 series. The 9800 GX2 combined two G92 GPUs on a dual-PCB card occupying a single PCI Express x16 slot, with two separate 256-bit memory buses (one per GPU) and 1GB of memory in total. The 9800 GTX followed with a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory.
GeForce 100 series
The following year NVIDIA launched the GeForce 100 series, essentially rebranded GeForce 9 series cards available only to OEMs, although the GTS 150 was briefly available to consumers.
GeForce 200
The GeForce 200 series, introduced in 2008, was built around the new GT200 core, a 65nm graphics processor with a total of 1.4 billion transistors. That year NVIDIA also changed its card-naming scheme, placing a GTX or GTS prefix before the model number instead of appending a suffix to the series number. The GeForce GTX 260 and the GTX 280 were the first products in the series, while the GeForce 310, released in November 2009, was a rebrand of the GeForce 210.
GeForce 300 series
The 300 series cards, launched in late 2009, were essentially rebranded versions of lower-end 200 series parts with support for DirectX 10.1. These were limited to OEMs only.
GeForce 400 series
The GeForce 400 series, based on the Fermi architecture and led by the GF100 chip, was introduced in 2010. These were the first NVIDIA GPUs to utilize 1GB or more of GDDR5 memory. The GTX 470 and GTX 480 were criticized for their high power use, high temperatures, and loud noise, although the GTX 480 was the fastest DirectX 11 card at the time.
GeForce 500 series
To fix those issues, NVIDIA brought out the 500 series with a new flagship, the GTX 580, based on an enhanced version of the GF100 architecture (GF110). It offered higher performance with lower power consumption, heat, and noise than the preceding GTX 480. The GTX 590, which packed two GF110 GPUs on a single card, was also introduced.
GeForce 600 series
In 2010, NVIDIA announced the Kepler microarchitecture, manufactured with the TSMC 28nm fabrication process. The company began supplying its top-end GK110 cores for Oak Ridge National Laboratory's Titan supercomputer, leading to a shortage of GK110 parts. As a result, NVIDIA used the GK104 core, originally intended for the mid-range segment, to power its flagship, the GTX 680. It was followed by the dual-GK104 GTX 690 and the GTX 670.
GeForce 700 series
In May 2013, NVIDIA announced the 700 series, still based on the Kepler architecture, although it finally featured a GK110-based card at the top of the lineup. The GTX 780 was a cut-down version of the GTX Titan that achieved nearly the same performance for two-thirds of the price. A week after the release of the GTX 780, NVIDIA announced the GTX 770, a rebrand of the GTX 680. It was followed by the GTX 760, which was also based on the GK104 core and similar to the GTX 660 Ti.
GeForce 800M series
The GeForce 800M series included rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.
GeForce 900 series
NVIDIA announced the new Maxwell microarchitecture in March 2013. The GeForce 900 series, based on Maxwell, was released in September 2014 and was the last series to support analog video output through DVI-I.
GeForce 10 series
In March 2014, NVIDIA announced that the successor to Maxwell would be the Pascal microarchitecture, which was finally introduced with the GeForce 10 series in May 2016. Pascal brought 128 CUDA cores per streaming multiprocessor, GDDR5X memory, unified memory, and NVLink.
GeForce 20 series
In August 2018, NVIDIA announced the Turing architecture as the successor to Pascal. The new microarchitecture was designed to accelerate real-time ray tracing and AI inferencing. It included new ray tracing units (RT Cores) that perform ray tracing in dedicated hardware, and it supported the DXR extension in Microsoft DirectX 12. The company also introduced DLSS (Deep Learning Super Sampling), a new form of anti-aliasing that uses AI to provide sharper imagery with less impact on performance.
The first GPUs to utilize the architecture were primarily aimed at high-end professionals and were introduced under the Quadro series. Eventually, the GeForce RTX series with RTX 2080 Ti, 2080, and 2070 were announced in 2018 followed by the RTX 2060 in January 2019.
In July 2019, NVIDIA announced the GeForce RTX Super line of cards, a refresh of the RTX 20 series which featured higher-spec versions of the RTX 2060, 2070, and 2080.
GeForce 16 series
In February 2019, NVIDIA announced the GeForce 16 series. Based on the same Turing architecture used in the GeForce 20 series, this series omits the Tensor (AI) and RT (ray tracing) cores. It offers a more affordable option for gamers while still delivering higher performance than the corresponding cards of previous GeForce generations. Similar to the RTX Super refresh, NVIDIA announced the GTX 1650 Super and 1660 Super cards in October 2019.
GeForce 30 series
The latest and most powerful graphics cards from NVIDIA, the 30 series takes over from the 20 series and was announced in 2020. It introduced a massive performance jump over its predecessor and an excellent price-to-performance ratio. However, getting your hands on one remains a difficult task.
Mobile GPUs
NVIDIA has produced graphics cards for notebooks since as far back as the GeForce 2 series, originally under the GeForce Go branding. Most of the features present in the desktop counterparts were made available on the mobile versions. With the introduction of the GeForce 8 series, the GeForce Go brand was discontinued and mobile GPUs became part of the main GeForce lineup, carrying an M suffix. NVIDIA then dropped the M suffix in 2016 with the launch of the laptop GeForce 10 series, in an attempt to unify the branding between its desktop and laptop GPU offerings. Currently, the RTX 20, GTX 16 and RTX 30 series of GPUs are available as both desktop and laptop variants. NVIDIA also has the GeForce MX range of mobile GPUs intended for lightweight notebooks with entry-level performance.
Nomenclature
Ever since the launch of the GeForce 100 series NVIDIA has been using the following naming scheme for its products:
G, GT, No Prefix - Mostly used for the entry-level category of graphics cards, with the last two numbers ranging from 00 to 45. Example - GeForce GT 730, GeForce GT 1030
GTS, GTX, RTX - Mid-range category of graphics cards with the last two numbers ranging from 50 to 65. Example - GeForce GTX 1060, GeForce RTX 2060
GTX, RTX - High-end category of graphics cards, with the last two numbers ranging from 70 to 95. Example - GeForce GTX 1080 Ti, GeForce RTX 3090
NVIDIA also uses the ‘Super’ or ‘Ti’ suffixes for its graphics cards to signify incremental updates.
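As a rough illustration of the scheme above, here is a minimal sketch (a hypothetical helper, not anything official from NVIDIA) that guesses a card's tier from its prefix and the last two digits of the model number:

```python
import re

def geforce_tier(name: str) -> str:
    """Very rough tier guess from a GeForce model name, following the
    prefix + last-two-digits rule described above. Purely illustrative;
    real product positioning has plenty of exceptions."""
    match = re.search(r"\b(GTS|GTX|RTX|GT|G)?\s*(\d{3,4})\b", name.upper())
    if not match:
        return "unknown"
    prefix = match.group(1) or ""
    last_two = int(match.group(2)[-2:])
    if last_two <= 45 and prefix in ("", "G", "GT"):
        return "entry-level"
    if 50 <= last_two <= 65:
        return "mid-range"
    if 70 <= last_two <= 95:
        return "high-end"
    return "unknown"

# Quick check against the examples above
for card in ("GeForce GT 1030", "GeForce GTX 1060", "GeForce RTX 3090"):
    print(card, "->", geforce_tier(card))
```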

I am having some issues with my GeForce GTX 1050Ti
My motherboard sometimes doesn't detect the card and I am forced to use the Intel integrated GPU

ahfdee said:
I am having some issues with my GeForce GTX 1050Ti
My motherboard sometimes doesn't detect the card and I am forced to use the Intel integrated GPU
What do you do then? Reboot? Wait? Remove and insert it into the slot again?

strongst said:
What do you do then? Reboot? Wait? Remove and insert it into the slot again?
Yes I do it multiple times and if I am lucky it starts working again but on the next reboot it stops and the same cycle continues

ahfdee said:
Yes I do it multiple times and if I am lucky it starts working again but on the next reboot it stops and the same cycle continues
Could be a mechanical/thermal issue of the card or the PCIe socket if you already tried all software related solutions like BIOS/driver.
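One quick way to rule out the software side is to ask the driver directly whether it can enumerate the card at all. Below is a minimal sketch using the NVML Python bindings (this assumes the third-party nvidia-ml-py package installed on top of a working NVIDIA driver; nothing here comes from this thread):

```python
# Minimal detection check via NVML, the management library the NVIDIA driver exposes.
# Requires: pip install nvidia-ml-py  (imported as "pynvml"). If this raises an
# NVMLError or reports 0 devices while the card is physically installed, the
# problem is below the OS/driver level (slot, power, BIOS), not in Windows/Linux.
import pynvml

try:
    pynvml.nvmlInit()
    count = pynvml.nvmlDeviceGetCount()
    print(f"NVML sees {count} NVIDIA GPU(s)")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        print(f"  GPU {i}: {pynvml.nvmlDeviceGetName(handle)}")
    pynvml.nvmlShutdown()
except pynvml.NVMLError as err:
    print("NVML could not talk to the driver:", err)
```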

ahfdee said:
I am having some issues with my GeForce GTX 1050Ti
My motherboard sometimes doesn't detect the card and I am forced to use the Intel integrated GPU
I was having the same issue with my gt 730 gddr5 card. I found out that the system memory was causing the problem.

Sidgup1998 said:
I was having the same issue with my gt 730 gddr5 card. I found out that the system memory was causing the problem.
How did you fix it?

ahfdee said:
How did you fix it?
I replaced my bad memory stick and voila the issue was fixed!!!

Sidgup1998 said:
I replaced my bad memory stick and voila the issue was fixed!!!
My RAM has no issues
Any other solutions

ahfdee said:
My RAM has no issues
Any other solutions
Did you check the card on another motherboard?

Sidgup1998 said:
Did you check the card on another motherboard?
I am not able to

ahfdee said:
I am not able to
Try cleaning the slot with some isopropyl alcohol and a q-tip (make sure you don't leave any fluff behind).
Do you have another PCIe slot available to try?

@kunalneo Thx for the summary.
I was searching for NVIDIA cards that support UEFI and can also be used in Linux Mint, but didn't find any info yet.
Is there a date from which they generally do?

ahfdee said:
My RAM has no issues
Any other solutions
I sent back a PC for a bad RAM stick, but it was really hard to find: it passed all diagnostics, then blue screened with all different errors. I found the bad stick by taking one out and running for a while, which did great; then I swapped the sticks out and it wouldn't boot. Shipped it the next day (I had already contacted the seller and got the return approved), but I got a better system for pretty much the same price.

WillisD said:
I sent back a PC for a bad RAM stick, but it was really hard to find: it passed all diagnostics, then blue screened with all different errors. I found the bad stick by taking one out and running for a while, which did great; then I swapped the sticks out and it wouldn't boot. Shipped it the next day (I had already contacted the seller and got the return approved), but I got a better system for pretty much the same price.
I think my GPU has thermal issues.
When I boot from the GPU's HDMI slot after 3 to 4 days of not using the PC, the GPU works. Do I need to put some thermal paste?

I'm no expert, but if you had thermal issues they wouldn't show up after 3 or 4 days idle. Get MSI Afterburner and watch temps while using it; for a thermal shutdown you'd need to be at 100C or higher. Are you on winblows or linux?
Either way do a clean install of drivers and reset nvidia settings. How do you boot from an hdmi slot?
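If you'd rather script the temperature check than eyeball it in Afterburner, something along these lines works as a rough sketch, again assuming the nvidia-ml-py (pynvml) bindings and a working driver:

```python
# Log GPU core temperature and load every 5 seconds via NVML
# (pip install nvidia-ml-py). Stop with Ctrl+C. Sustained readings near
# 100C under load would point at a cooling problem.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first NVIDIA GPU

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        util = pynvml.nvmlDeviceGetUtilizationRates(gpu)
        print(f"temp: {temp} C   gpu load: {util.gpu}%")
        time.sleep(5)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```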

WillisD said:
I'm no expert, but if you had thermal issues they wouldn't show up after 3 or 4 days idle. Get MSI Afterburner and watch temps while using it; for a thermal shutdown you'd need to be at 100C or higher. Are you on winblows or linux?
Either way do a clean install of drivers and reset nvidia settings. How do you boot from an hdmi slot?
I can either boot from the Nvidia HDMI slot or the default Intel HDMI slot.
To boot from either of the slots I just take the HDMI cable out of one slot and put it into the other one

ahfdee said:
I think my GPU has thermal issues.
When I boot from the GPU's HDMI slot after 3 to 4 days of not using the PC, the GPU works. Do I need to put some thermal paste?
I typically redo my thermal paste about once a year.

Anybody got a list of which ones support GOP?

kunalneo said:
NVIDIA continues its dominance in the graphics card industry. The company’s primary GPU lineup under the GeForce brand has been around for over two decades with close to twenty iterations. ...
Any GTX model is better than a GT model, even if the GT's number is bigger! For example a GTX 750 is better than a GT 1030, and the Ti versions are stronger still... I have used and tested computer hardware and software for about 30 years. I started with a Commodore 64 that had to save data (GW BASIC) to a tape drive, then used a Spectrum 128 with 128 KB of memory,
then my first PC was a 286 with a 10 MB HDD and 2 MB of RAM running DOS 6.22, and then I used my first Windows (Windows 3.1).
In my experience the better card for gaming and rendering is whichever one has more CUDA cores, more memory bus bandwidth, and more ROPs/TMUs (especially for rendering and video mixing).
Higher core and memory clocks matter less for speed, but more CUDA cores, a wider memory bus, and more ROPs and TMUs are what really count, especially for NVIDIA cards.
Sorry for my poor English
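To put rough numbers on the bandwidth point: memory bandwidth is simply the effective data rate per pin times the bus width in bytes. A small worked sketch (the per-card figures are approximate published specs for the GDDR5 variants, used here only for illustration):

```python
# Memory bandwidth = effective data rate (Gbps per pin) x bus width (bits) / 8.
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# Approximate published GDDR5 specs, for illustration only.
cards = {
    "GeForce GT 1030 (GDDR5)": (6.0, 64),
    "GeForce GTX 750": (5.0, 128),
    "GeForce GTX 1050 Ti": (7.0, 128),
}

for name, (rate, bus) in cards.items():
    print(f"{name}: {bandwidth_gbs(rate, bus):.0f} GB/s")
# GT 1030 ~48 GB/s vs GTX 750 ~80 GB/s: the wider bus wins despite the newer
# card's higher clocks, which is the point being made above.
```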

Related

[Q] Galaxy note showing Cortex A7 processor in quadrant

Quadrant is showing that the processor is Cortex A7 but it has to be A9.
AnTuTu is showing me the same, A7.
Can anybody check and confirm?
Same here. ARMv7 Processor rev1 (v7l)
But the A7 supports up to 1.2GHz
It has to be A9.
There is no Cortex A7, it's an ARMv7 CPU; all Cortex A chips are ARMv7.
Oh thanks for the clarification
sangalaviral said:
Quadrant is showing that the processor is Cortex A7 but it has to be A9.
AnTuTu is showing me the same, A7.
Can anybody check and confirm?
And one more thing you didn't notice I guess: sometimes it shows only "Cores=1", which means Quadrant Standard is not ready for the Note yet.
everton86 said:
There is no Cortex A7, it's an ARMv7 CPU; all Cortex A chips are ARMv7.
Not true anymore.
There's ARM 1-10
ARM11 (HTC TouchPRO, Google G1, iPhone 3G, Motorola CLIQ)
ARM Cortex A8 (Motorola DROID, iPhone 3GS, iPhone 4, Samsung Infuse 4G)
ARM Cortex A9 (Motorola Atrix, RIM PlayBook, iPhone 4S, Galaxy NOTE)
ARM Cortex A15 (the successor to the Cortex A9, provides double the performance at the same battery drain, and can even be clocked from the standard 1GHz upto 2GHz)
ARM Cortex A7 (the new Cortex A8 solution for cheaper or low-powered devices. Derived by using smaller nm components based on the Eagle architecture but underclocked for better battery savings)
...ARM's big.LITTLE Computing = 2x Cortex A15 (2GHz) + 1x Cortex A7 (1GHz)
= Quadruple the power of standard Cortex A9 SoC's quoted with the same battery life.
Source: http://www.arm.com/products/processors/cortex-a/cortex-a7.php

iBorow what i can :))) samsung quadcore hits the new iphone

http://www.gsmarena.com/new_iphone_to_supposedly_feature_quadcore_exynos_4-news-4482.php
in ur face iphone fanatics???))
iphones have had samsung chips for a long while.
...iPhone components are made by all the Android manufacturers.
I'm just most excited about the Qualcomm APQ8064A chipset, "which will integrate an LTE radio and the Adreno 320 GPU all into a 28nm process"

Will this have the S4

I haven't seen this confirmed anywhere. All the specs are general in terms of the processor. They only mention dual core snapdragon.
Yes it does, I have one. Works great
Pro? Prime? what version?
Pro
I think it's the Pro version because it has Adreno 225 graphics; I think the Prime has Adreno 320 graphics (which is super fast)!!!
MSM8960 with Adreno 225 GPU
Snapdragon S4 Krait
1.5Ghz Dual-Core
This is the reason why we have a smaller battery. This chip consumes far less power than what is available. In some cases it beat the Quad core Tegra3. For the most part the Tegra3 was faster, but the S4 is a pretty fast chip. 28nm

S10 not Exynos

I read the article about the Sprint S10 5G being Exynos; however, going to the links in the article and also doing my own search as a potential buyer, I saw nothing about Exynos for that particular device from Sprint. All I see is SDM 855, which is Snapdragon. Here are the specs.
Battery information : 4500mAh Li-Ion Polymer
Bluetooth profiles : A2DP, AVRCP, DI, HFP, HID, HOGP, HSP, MAP, OPP, PAN, PBAP
Dimensions : 6.4" x 3.0" x 0.31"
Display : 6.7" Quad HD+ Dynamic AMOLED Infinity Display, 3040x1440
Keyboard : Capacitive
Memory : Internal Memory: 8GB RAM / 256GB ROM
Operating system : Android 9.0
Processor : SDM 855 + SDX50M, Octa Core, Single 2.8GHz Triple 2.4GHz Quad 1.7GHz
Camera : Four Rear Cameras 12MP/16MP/12MP (3D Area) & Dual 10MP + 8MP UHD Front Cameras
Talk time : Up to 44 hours
Weight : 6.98 oz
Here is the link to the Sprint website: https://www.sprint.com/en/shop/cell-phones/samsung-galaxy-s10-5g.html

Samsung Galaxy Watch 3 chipset

THIS is my problem with the new Galaxy Watch 3.
I don't really feel that there is anything new on this "new" Galaxy Watch 3. Basically, the CPU, clock speed, GPU, everything is the same. It has a newer version of the Tizen OS and a bit more RAM, but that doesn't really give me anything new on this watch.
What do you guys think?
Samsung Galaxy Watch - Aug. 2018
Chipset Exynos 9110
Clock Speed Dual-Core @ 1.15 GHz
Cores Dual-Core
GPU Mali-T720
Samsung Galaxy Watch Active - Feb. 2019
Chipset Exynos 9110
Clock Speed Dual-Core @ 1.15 GHz
Cores Dual-Core
GPU Mali-T720
Samsung Galaxy Watch Active 2 - Sept. 2019
Chipset Exynos 9110
Clock Speed Dual-Core @ 1.15 GHz
Cores Dual-Core
GPU Mali-T720
Samsung Galaxy Watch 3 - Aug. 2020
Chipset Exynos 9110
Clock Speed Dual-Core @ 1.15 GHz
Cores Dual-Core
GPU Mali-T720
INFO is taken from SamMobile
As they didn't change the screen resolution... still a tiny 360 x 360 pixels... correct me if I'm wrong...
So maybe it doesn't really make sense to put a new, faster CPU inside this tiny watch...
The GW3 is IMHO the first watch from Samsung with 8 GB eMMC...
No idea if that's useful for everyone... but hey, bigger storage.
I can remember what they told us at the Unpacked event...
Watch faces, great watch faces...
And for watch faces this CPU is fast enough...
Maybe if more and better apps are allowed on the Store...
Best Regards
New TI chip / light sensors.
What benefit would there be in upgrading the chipset? It would only raise the cost of this already expensive watch. It's Tizen, it's a smartwatch, it doesn't need improvement in that area.
The differences lie in a better screen, slimmer lighter design, more storage and new health sensors. If these aren't important to you then stick with what you have, but don't reject it just because they haven't upgraded something that has no benefit to the watch's performance.
.
The difference with the Watch3 is in sensors, weight, size. It now can measure O2, blood pressure (future), EKG and do more complete sleep monitoring, as well as exercise monitoring and analysis. Yes it has the same processor and 25% less battery.
apprentice said:
What benefit would there be in upgrading the chipset? It would only raise the cost of this already expensive watch. It's Tizen, it's a smartwatch, it doesn't need improvement in that area.
The differences lie in a better screen, slimmer lighter design, more storage and new health sensors. If these aren't important to you then stick with what you have, but don't reject it just because they haven't upgraded something that has no benefit to the watch's performance.
.
Thanks for the feedback. Actually I recently shifted over to Android and I don't have a smartwatch yet. I'm looking for one. Apple usually upgrades their chipset, so I was only assuming that other vendors might do the same.
jgrobertson said:
The difference with the Watch3 is in sensors, weight, size. It now can measure O2, blood pressure (future), EKG and do more complete sleep monitoring, as well as exercise monitoring and analysis. Yes it has the same processor and 25% less battery.
Which one would you recommend if I was looking for a watch with primarily health features. Someone recommended me the Garmin Venu or the Garmin Fenix 6 series. What do you guys say?
