Nvidia has acquired Arm. What does this mean for the future of AI, edge computing, and the people who write software for these chips?

In a move that has significant implications for the tech industry, U.S.-based graphics chip maker Nvidia announced last week that it would purchase U.K.-based Arm Holdings from Japanese investment firm SoftBank for $40 billion. For anyone programming for AI, data processing, or embedded systems, this could mean your data-intensive applications will soon be running on Arm-designed chips with native Nvidia GPU support.

Arm Holdings is the company behind Arm processors, the chip designs that power over 90% of the world’s smartphones and everything from autonomous vehicles to toasters to washing machines. While the company has no manufacturing capabilities of its own, it describes itself as the “Switzerland” of technology: it licenses its chip designs to any company that wants them and lets others handle the actual manufacturing.

Under the terms of the new deal, Arm will remain headquartered in Cambridge as a UK-based company. Nvidia has also announced that it will open an AI research center in Cambridge, near the Arm headquarters, to serve as a central hub for collaboration between AI scientists and researchers from across the globe.

So, what does this mean for our readers, the folks writing code every day? The acquisition could have major implications for developers working on embedded systems, and it may be worth learning platforms like CUDA (Compute Unified Device Architecture) and its SDK now. Processing large amounts of data in the cloud may get faster, while powerful machine learning models may fit into less and less on-device memory. Read on to learn the backstory behind this deal and what it could mean for the world of computing and programming.

From graphics to AI

While Nvidia is best known for the graphics cards that power modern video games, the last few years have seen a wealth of new applications for its technology: AI, data processing, and cryptocurrency mining have all turned to Nvidia GPUs. Demand from these new areas exploded so fast that, in 2018, a run on Nvidia cards by bitcoin miners led to a global shortage. While the headiest days of the cryptocurrency boom may have passed, the applications for large-scale parallel data processing continue to proliferate.

Where CPUs are designed for complex, branching logic, GPUs are optimized to run many floating-point computations in parallel. 3D rendering requires a massive number of arithmetic operations as vertices rotate and transform. For machine learning and other large data processing workloads, the GPU’s focus on parallel arithmetic is a match made in heaven.
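As a concrete illustration of that difference, here is a minimal CUDA sketch (the kernel name and array sizes are invented for the example, not taken from any Nvidia product): where a CPU loop would walk the array one element at a time, the GPU covers it with thousands of threads, each handling a single element.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread computes one output element. A CPU would walk the
// array serially; the GPU runs thousands of these threads in parallel.
__global__ void scaleAdd(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // ~1M floats
    const size_t bytes = n * sizeof(float);

    float *a, *b, *out;
    cudaMalloc(&a, bytes);
    cudaMalloc(&b, bytes);
    cudaMalloc(&out, bytes);
    cudaMemset(a, 0, bytes);          // real code would copy data in with
    cudaMemset(b, 0, bytes);          // cudaMemcpy; zeroed here for brevity

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scaleAdd<<<blocks, threads>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("launched %d blocks of %d threads\n", blocks, threads);
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```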

Nvidia already offers products that marry its GPUs with Arm chip designs to create data processing workhorses. In late 2019, the company built an Arm-based, GPU-accelerated server platform designed for high-throughput data processing. Purchasing Arm may be a way to double down on the bet that GPUs will become a mainstream data processing tool.

As a result of the acquisition, Nvidia will now be placed at the forefront of Arm’s IoT ecosystem and of cloud-connected, AI-driven edge computing. Edge computing refers to an approach where information is stored and processed locally, on or near the device that generates it, rather than in a central data center many miles away. Intel made a similar move when it acquired Movidius.

If edge computing matures the way these chip companies hope, companies will be able to save money by gathering information locally and acting on critical data immediately. For latency-sensitive applications such as autonomous vehicles, even a millisecond of delay in data processing is unacceptable.

With the addition of Arm, it’s possible Nvidia will become a dominant force in everything from microprocessors to tablets, mobile phones to street lamps, washing machines to autonomous vehicles.

What changes in the computing landscape made this possible?

Cloud computing has reshaped software architecture in the past few years: distributed software can now automatically scale its computing resources based on immediate need. For example, 86% of all enterprises are expected to depend on SaaS (one of the three primary cloud computing categories, along with IaaS and PaaS) for most or all of their software needs by 2022.

At the same time, Moore’s law has held steady and semiconductors have continued to shrink (though they may hold their size in 2021), which has allowed more and more devices to include computing power. Combine that with easy access to scalable computing power, and you have a world where everything is a computer and needs specialized hardware.

As more and more sophisticated applications were developed for these devices, the idea of edge computing arose. Instead of sending data off to be processed elsewhere, the data is now processed on-device, sidestepping network latency issues or the absence of a network altogether, both of which are prevalent in the fastest-developing countries.

As mentioned earlier, the GPUs that Nvidia specializes in are very good at manipulating and processing data in parallel. One of the most noteworthy advances in the computing industry overall has been the proliferation of GPUs into numerous solutions (such as 3D mapping, image processing, and deep machine learning), to the point that traditional CPU power has not been able to keep up.

What could change in the developer landscape

For developers, this may mean that new realms of data processing speed open up. These data powerhouses could be added to cloud offerings as add-ons, bringing a seamless speedup to complex ETL pipelines. For integrated chipsets on smaller devices, graphics and data processing ceilings could rise, allowing mobile apps with improved graphics and IoT devices with more sophisticated AI.

Speaking of AI, neural nets may become simultaneously more complex and smaller. Specialized AI hardware could be developed to support consumer applications. Neural net software currently used in heavyweight data processing and forecasting could find new use cases, or new applications could emerge that bring small-scale AI benefits to a wide range of people.

But the biggest effect may be that more developers will need to know the CUDA (Compute Unified Device Architecture) SDK. Like SIMD intrinsics on a CPU, the SDK gives programs direct access to the GPU’s parallel processing. And if Nvidia manages to unify the physical memory of the CPU and GPU, it could open up numerous new avenues for optimization and advancement.
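For a sense of what that unification could build on: CUDA’s existing Unified Memory feature already lets a single allocation, made with cudaMallocManaged, be addressed by both the CPU and the GPU, with the runtime migrating pages between them on demand. Below is a minimal sketch; the kernel name and sizes are illustrative.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Square each element in place; one thread per element.
__global__ void squareAll(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main() {
    const int n = 1024;
    float *data;

    // Unified Memory: one allocation visible to both CPU and GPU.
    // The CUDA runtime migrates pages between host and device on demand.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = (float)i;  // written by the CPU

    squareAll<<<(n + 255) / 256, 256>>>(data, n);    // read/written by the GPU
    cudaDeviceSynchronize();

    printf("data[3] = %.1f\n", data[3]);             // 9.0, read back by the CPU
    cudaFree(data);
    return 0;
}
```

Today that page migration still has a cost; physically unified memory of the kind speculated about above would eliminate the copy rather than merely hiding it behind a shared pointer.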

In the end, though, many of these details may be abstracted away from anyone writing code by libraries and high-level programming languages. The only coders sure to be affected are those working directly with embedded systems.

Conclusion

Nvidia’s acquisition of Arm is likely to have a lasting effect on the tech industry as a whole. Not only does Nvidia become a much bigger player in IoT and cloud-connected edge computing (arguably positioned to become the single most influential one), but major corporations that license from Arm, such as Apple, Intel, and Samsung, may look to alternative sources for their chip and microprocessor designs.

The biggest benefit likely to come out of all of this is that major corporations and small startups alike have begun developing AI-focused microprocessors that can handle complex neural networks. That means we are likely to see microprocessor innovation on a scale we have never seen before.
