Computer graphics card and its purpose
A video card is the device that outputs the image to the monitor. In other words, without a video card we would see neither text nor pictures on the screen; in general, a computer cannot work without one.
There are two types of video cards: external (discrete) and built-in (onboard, i.e. mounted directly on the motherboard). Let's look at these concepts more closely. The external video card market currently has two dominant players: the Californian company Nvidia and the Canadian ATI Technologies. In 2006 the latter was bought by the American processor maker AMD (Advanced Micro Devices). AMD now actively sells its graphics accelerators under the well-established Radeon brand it inherited from ATI.
Here is what an entry-level external (discrete) graphics card from ATI/AMD can look like:
The numbers in the photo are:
- Heatsink with a cooling fan for the GPU
- The PCI-Express connector with which the card plugs into the motherboard
- (3-4) Video outputs: VGA and DVI respectively (a mass transition to the new digital HDMI standard is now underway)
Note: the GPU (Graphics Processing Unit) is the graphics processor itself (its core).
Let's take a closer look: the core of a video card is, somewhat simplified, a chip much like the core of a CPU. It just handles its own specific task: outputting any image to the user's screen, from plain text all the way to the three-dimensional scenes of your favorite computer game.
At the factory, the GPU chip is soldered (using the BGA mounting method) onto the PCB of the external video card (the red "plastic" in the figure above); a heatsink is pressed on top of it (glued or screwed on) to carry away heat, and on the heatsink itself sits a fan (cooler) to blow off the hot air. As you can see, structurally all of this is very similar to a CPU and its cooling system.
The external video card connects to the motherboard through a dedicated slot. Each generation of video cards has its own (depending on when the card was released).
This statement is true for almost the entire range of computer components, and here you can easily tell the generations apart purely by sight. The first video cards were installed in ISA (Industry Standard Architecture) connectors, which were replaced by the PCI connector (Peripheral Component Interconnect), then AGP (Accelerated Graphics Port), and now we have the PCI-Express bus and its corresponding slot. Its various versions (revisions) are physically and electrically compatible and differ only in the bandwidth (width) of the data bus connecting the video card and the motherboard.
The card is physically connected to the monitor (or TV) via its video outputs. Here, again, everything depends on which standard your monitor or TV supports.
For example, the VGA connector (number 1 in the photo above) transmits the signal in analog form. The DVI standard (number 2) implies purely digital signal transmission (without additional conversions). The newer HDMI standard can carry audio over the cable along with the image. The abbreviation HDMI stands for "High Definition Multimedia Interface".
I think you've caught the main idea: we look at which outputs we have, check them against what we want to connect so there are no "it won't fit" surprises, and think about which connectors we might need in the future, buying with room to grow :) And, of course, don't forget about adapters that let an external video card connect to a display device: DVI-VGA, or HDMI-VGA (the latter is not so much an adapter as a full-fledged converter-controller).
I would also like to say a few words about modern video cards that require additional power to work, fed to them directly from the computer's power supply unit.
The situation is as follows: any external video card (be it PCI, AGP or PCI-Express) is powered directly through the motherboard connector into which it is installed. For example, the maximum power the AGP connector can provide is 42 W, and PCI-Express version 1.1 provides 76 W.
As you can guess, many modern external video cards (especially under peak load) consume much more than that. That is precisely why the additional power connector was invented for them.
Here's how these connectors look:
Such a graphics accelerator must come bundled with a special adapter that connects one of the power supply's standard "Molex" connectors to the card's additional power input.
Modern power supplies include these additional connectors out of the box, so you won't need adapters, but you should know the option exists! Modern graphics accelerators (especially AMD products) can consume a fair amount of power (up to 250 watts), so keep this in mind when choosing both the GPU and the power supply that will have to feed all this luxury :)
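As a back-of-envelope illustration of the power budget described above, here is a small Python sketch. The slot limits are the figures quoted in this article; the 75 W and 150 W ratings for the 6-pin and 8-pin auxiliary plugs come from the PCI-Express specification. The helper function itself is purely illustrative, not a real tool.

```python
# Rough power-budget check: does a card need auxiliary power connectors?
SLOT_LIMITS_W = {"AGP": 42, "PCIe 1.1 x16": 76}  # figures quoted in the article
AUX_6PIN_W = 75    # per the PCI-Express specification
AUX_8PIN_W = 150

def aux_connectors_needed(card_tdp_w, slot="PCIe 1.1 x16"):
    """Return which auxiliary plugs would cover the shortfall beyond the slot."""
    shortfall = card_tdp_w - SLOT_LIMITS_W[slot]
    if shortfall <= 0:
        return "no additional power needed"
    if shortfall <= AUX_6PIN_W:
        return "one 6-pin connector"
    if shortfall <= AUX_8PIN_W:
        return "one 8-pin connector"
    return "multiple connectors (6-pin + 8-pin, etc.)"

# The GeForce 9600 GT mentioned later: ~96 W TDP on a 76 W slot
print(aux_connectors_needed(96))  # one 6-pin connector
```

This is exactly the situation with my old 9600 GT described below: 96 W of demand against 76 W from the slot means one 6-pin plug is required.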
If the computer will be used primarily for gaming or 3D graphics work, you can buy a motherboard with several PCI-Express x16 slots for graphics accelerators. This allows several external video cards of the same model to be used simultaneously, which together gives a tangible boost to graphics subsystem performance.
At Nvidia this proprietary technology is called SLI (Scalable Link Interface), and it requires absolutely identical external video cards to work together. Its direct competitor has a similar development called AMD CrossFire, which can use any graphics accelerators that support the technology. Both developments let you combine the power of several external video cards and make them work as a single unit.
If you decide to set up something like this, keep in mind that some motherboards are tailored to only one of these technologies. Universal boards exist too, so it is worth clarifying this point before buying. Although, in fairness: do you really need to gang up video cards for games? A single external video card of the latest generation can cope with any modern game.
In the second part of the article I would like to share my experience of upgrading the external video card in my home computer. Why is that interesting? I simply expect that along the way some new concepts and descriptions will come up that can organically supplement and deepen the topic of this article. If not, I'll just be showing off my purchase! :)
So, before the upgrade I had an Nvidia card, an "Asus GeForce 9600 GT". Here is how it looked:
Great card, by the way! Hardware DirectX 10 support, 512 megabytes of video memory: very decent for its time! It was my first external video card that required additional power (the 6-pin connector on the right). At peak load the card's TDP could approach 96 watts, and as we recall, the PCI-Express connector can provide only 76, so without extra power it goes nowhere! It used the (then new) 65-nanometer process. The card looked quite impressive even in my small computer case. Let me put it this way: until that moment I had never indulged myself with "toys" of this class :) As for the additional power... I reassured myself that modern graphics adapters apparently simply cannot do without it.
Note: TDP ("Thermal Design Power" or "Thermal Design Point") roughly means the thermal design requirements for heat removal. In other words, it is the expected dissipated power: the amount of heat released per unit of time (one second). The figure indicates how much thermal power the product's cooling system must be designed to remove.
But everything flows, everything changes, and in the computer industry it changes very quickly, so the time came to replace this model with a newer one. I again settled on an Nvidia product, the "GeForce GTX 750 Ti", built on the company's new architecture called "Maxwell". The card has hardware support for DirectX 11.2 and two gigabytes of fast GDDR5 (Graphics Double Data Rate) video memory for the GPU; the chip itself is made on a 28-nanometer process (a single transistor on the die measures 28 nanometers). In short, it runs all the games of 2015 at medium-high settings.
Note: a nanometer is one billionth of a meter.
Why do I dwell on these details at such length? Simply to show you how these two graphics cards of different generations compare in performance. And it is telling: the "GTX 750 Ti" smoothly runs games that the "9600 GT" cannot even launch, but the beauty is not only in that :) Look at the new video card in its reference design:
Note: the reference design is the printed circuit board and cooling system developed by the chip manufacturer (in this case, Nvidia). Partners (Asus, MSI, Gigabyte) that have their own production facilities can place the graphics chip on their own board with a different set of components and install their own cooling system.
Just visually compare these two external video cards (you can easily orient yourself by the parts protruding past the PCI-Express slot) and you will understand everything:
- The new card needs no additional power at all (that's right, none!) :)
- A full-fledged modern graphics core based on the NVIDIA GM107 chip
- Two gigabytes of fast GDDR5 memory
- The card fits in any system unit, since it is only 159 millimeters long!
- The reference cooling system is enough to dissipate the heat the card generates: its rated TDP is 60 watts. Seriously, look again at that little cooler in the photo above :)
- The combination of the second and third points yields excellent performance in the games of 2015
- The minimum recommended power supply for a PC with this video card is 300 watts (for comparison, the recommendation for my previous 9600 GT was 400 watts!). The device consumes only 60 watts of power... Stunning :)
This is the new Maxwell architecture in all its glory! According to the manufacturers themselves, chips based on it consume half as much energy as previous-generation GPUs, with a performance increase of almost 300%.
Of course, as the saying goes, "every sandpiper praises its own swamp," but even with the naked eye you can see that this time Nvidia's engineers really do have something to be proud of!
Before that, I confess, I thought the gaming graphics card industry was clearly on the wrong path. The ever-growing TDP (power consumption) of its products meant that a 750-watt power supply surprised no one, and posts were already popping up on forums like: "I bought myself a kilowatt PSU, I figure I'll have no problems!" Of course! What problems?! You'll just pay a lot for electricity (as if a small electric kettle were constantly plugged into the outlet), but otherwise, no problem :)
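The "small kettle" quip is easy to put into numbers. A minimal sketch, assuming an electricity price of 0.15 per kilowatt-hour (the tariff, hours and days are made-up illustration values, not data from the article):

```python
# Back-of-envelope running cost of a power-hungry graphics card.
def monthly_cost(power_w, hours_per_day, price_per_kwh=0.15, days=30):
    """Energy in kWh times tariff: power (kW) * time (h) * price."""
    kwh = power_w / 1000 * hours_per_day * days
    return kwh * price_per_kwh

# A 250 W card gaming 4 hours a day for a month:
print(round(monthly_cost(250, 4), 2))  # 4.5 (currency units at the assumed tariff)
```

Not ruinous on its own, but remember this is the card alone, on top of the CPU, monitor and the rest of the system.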
To demonstrate more clearly where the constant race for power and the pursuit of "parrots" (the points or rating an accelerator earns in benchmark programs like 3DMark) has led, I will show a few photos. Below is the AMD Radeon R9 290X (at peak load the video card alone can consume up to 250 watts).
As you can see, a conventional cooling system design is no longer enough for this external video card, and it uses an improved turbine-type version: the fan forces air through the cooling radiator, and the air is then expelled from the case through special openings on the card's rear panel. To appreciate the full "tragedy" of the situation, let's look at one more representative of the AMD card family, the Radeon R9 295X2:
This dual-GPU graphics accelerator has a TDP of 500 watts! Dissipating the heat of this "stove" (the tongue refuses to say merely "cooling" it) is the job of a combined liquid cooling system. Right, next we'll be pouring in nitrogen (that comes in liquid form too) :) Here is how this whole outrage looks in cross-section:
As you can see, Nvidia was the first to think this over. Instead of blindly piling on megahertz and enlarging its external video cards, it went down the path of seriously optimizing existing solutions. The result was brilliant: reduced heat output and power consumption with noticeably higher performance. Bravo, Nvidia! The situation reminded me of the breakthrough Intel made when it released the Core 2 Duo onto the CPU market. Back then its competitor (the same AMD) was forced to urgently and seriously cut prices on a number of its products, since they objectively could not compete with the new technology.
Of course, it is possible that AMD will also come to its senses and soon stop turning users' personal computers into open-hearth furnaces, but I have little faith in that. I think, as usual, prices will simply be lowered: either way, you see, we users win! :)
So, let's talk a little about what is so good about the Maxwell technology. First of all, it opens up a line of energy-efficient solutions from Nvidia for mobile systems (tablets, smartphones, game consoles, laptops, etc.). If you are curious, read about the NVIDIA Tegra mobile processors; they are very interesting!
The company's policy at this stage is to develop and test a solution for mobile devices first, then scale it up for high-performance desktop systems. As a result, all the energy-saving technologies migrate from mobile solutions to high-end graphics accelerators: the size of the latter shrinks while performance grows. Remember how, in the early days of computing, computers were also very large, and now we can put one on our lap or hold it in the palm of a hand :)
For example, an external video card on the Maxwell architecture automatically drops its power consumption to a minimum while you watch a movie or work in a text editor, or, conversely, raises the core frequency above the nominal value when a demanding application requires it (Nvidia GPU Boost technology).
Getting back to my "new toy" :) I bought an external video card from MSI. It is the same Maxwell-based GM107 chip, but mounted on a larger circuit board with a component base modified from the reference design, and a cooling system using copper heat pipes. The card's full name is "GeForce GTX-750-TI-OCV1"; its photo is below:
How does it differ from the reference design? First of all, higher-reliability components. The clue is in the abbreviation at the end of the accelerator's name: OCV1. What exactly V1 means I am not sure (version 1, perhaps?), but OC speaks volumes: this external video card ships with an overclocked (OverClocking) version of the chip. Naturally, to guarantee stable operation it needs a different (more robust) set of components, hence the longer board and the larger fan.
All this while keeping the advantages of the new technology listed above. For example, no matter how hard I tried with the FurMark program, I could not heat my card even to 70 degrees Celsius (idle, at the height of summer, it sat at about 31 degrees!). And there is no additional power connector on the card; true, a mounting spot for the socket was provided on the board, but being unnecessary, it was left unwired.
If the reference "GTX 750 Ti" has a base clock of 1020 MHz, and 1085 MHz with its adaptive boost technology, then the OverClocking (OC) version starts at 1059 MHz and boosts to 1137 MHz. There is an even more extreme version from the same MSI (the famous dragon series). That model is called the "GTX 750 Ti Gaming Edition".
As you can see, everything is much more serious here: the frequencies are raised further and the cooling system is completely redesigned, but at heart it is still the same little card we examined in the middle of this article :)
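Those factory overclocks sound impressive, but the clock figures quoted above work out to an uplift of only a few percent. A quick Python check:

```python
# Factory-overclock uplift of the OC card versus the reference GTX 750 Ti clocks.
REF = {"base": 1020, "boost": 1085}  # MHz, reference design
OC = {"base": 1059, "boost": 1137}   # MHz, MSI OC version

for mode in ("base", "boost"):
    delta = OC[mode] - REF[mode]
    uplift = delta / REF[mode] * 100
    print(f"{mode}: +{delta} MHz ({uplift:.1f} %)")
```

Roughly a 4% bump in each mode, which helps explain why factory overclocking alone rarely transforms real-game performance.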
In principle, I am against any overclocking, even factory overclocking! In theory, any overclocking shortens the device's overall "life", which is logical: it has to run constantly outside its rated conditions. Here is an example: the well-known computer game "The Witcher 3" from the Polish studio CD Projekt regularly produced what gamers call "freezes": the picture locked up completely for a few seconds, after which everything returned to normal. Very annoying!
A utility for fine-tuning Nvidia graphics adapters called "NVIDIA Inspector" helped me solve the problem. The program itself can be downloaded from our site. Its interface is divided into two panes: the left one shows the current values of the device's main parameters, while the right one is for fine-tuning them.
We are interested in the "Show Overclocking" button in the lower-right corner. After clicking it, the following warning window appears:
It says that all manipulations involving overclocking the video card are performed at our own risk, and if the card stops working afterwards, the software developer bears no responsibility! I am with the developer on that :)
The nuance here is that we are not going to increase (overclock) the graphics core frequency but decrease it! Click "Yes", confirming that we are responsible users and our own admins. After that we are given access to the utility's second window. Look at the photo below:
Note the "Base Clock Offset" value (the offset relative to the base value) on the right side of the program window. It is set to "-50" MHz. That is minus fifty! In other words, we lowered the graphics core frequency by 50 megahertz. After changing the values, don't forget to click the confirmation button: "Apply Clocks & Voltage".
Note: At any time, you can return to the default values by clicking on the "Apply Defaults" button.
Look at the left side of the window at the frequency values highlighted in red: "GPU Clock" and "Default Clock". The first shows the adapter's current frequency, the second the default (factory) one. The two other numbers next to the common label "Boost" show the GPU frequency in its regular boost mode, again the current (top) and factory (bottom) readings.
As you can see, both lines show a frequency reduced by 50 megahertz, exactly as much as we specified on the right. I don't know why, but "The Witcher 3" no longer "freezes" and I can play it in peace. So overclocking is not always a blessing!
Note: after you close the program, all the changes you made are lost. If you need them to persist, simply minimize the application to the tray instead.
As an alternative, and simply for displaying information about your 3D accelerator, I can recommend the free program "GPU-Z". It shows very detailed information about the GPU; on its second tab, "Sensors", you can watch statistics on core and memory usage, fan speed, chip temperature, and so on.
Finally, I would like to give some simple advice to those buying a video card for games, because, let's be honest, we all love to occasionally romp through a new shooter or get lost in a strategy campaign. First of all, an external video card for games cannot use passive cooling (no fan). The names of such solutions usually include the word "Silent" (quiet); they suit office computers, but not gaming ones.
Regularly update your graphics accelerator's driver; it really helps. I remember a case where this procedure alone significantly improved the FPS (Frames Per Second) drops in one game.
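Since FPS and "freezes" came up several times, here is a minimal sketch of how frame times relate to FPS and why one long frame ruins perceived smoothness even when the average looks fine. All the numbers are made up for illustration:

```python
# How a single 'freeze' (one very long frame) affects FPS figures.
frame_times_ms = [16, 17, 16, 250, 16, 17]  # one 250 ms spike = visible freeze

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
worst_fps = 1000 / max(frame_times_ms)  # the spike dominates perceived smoothness

print(f"average FPS: {avg_fps:.0f}, worst-case FPS: {worst_fps:.0f}")
```

Five of the six frames here render at a comfortable ~60 FPS, yet the single 250 ms stall drags the average down and, more importantly, is exactly what the eye registers as a freeze.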
Also try to buy products from manufacturers with a solid reputation. For video cards that would be the same AMD, Asus, Gigabyte, or MSI. I remember having a fine Sapphire card for a long time; I can't say a bad word about it, or was I just lucky? That's all for now; check out our other articles.