<p>Traditionally, Xilinx has provided hardware solutions for vision AI applications at the chip level. That still leaves a big challenge: developing the rest of the hardware platform and the software infrastructure (board support package) for that platform. If you’re a software engineer just looking to experiment, creating an optimized, cost-effective prototype that is powerful enough to accelerate a wide range of vision and AI pipelines, and that can carry you beyond the prototype stage, may be outside of your skill set.</p>

<p>Furthermore, AI is a rapidly changing domain, which means that inference accelerators such as SoCs (systems-on-a-chip) and GPUs may be out of date by the time you are ready for production. Xilinx Adaptive SoC technology, incorporated into the Kria SOM, leverages <a href="https://www.xilinx.com/products/silicon-devices/fpga/what-is-an-fpga.html">FPGA</a> technology to enable future customization of the underlying neural network accelerator and vision pipeline. This flexibility ensures that your chosen platform remains relevant as technology needs shift, future-proofing your design.</p>

<p>On the downside, FPGA programming has historically been a difficult and specialized skill. You first need to know how to design digital circuits (you’re essentially creating specialized processor logic to execute specific algorithms) and understand what registers, adders, multiplexers, and lookup tables all do. Then you have to create that logic in a hardware description language, like <a href="https://stackoverflow.com/tags/verilog/info">Verilog</a> or <a href="https://stackoverflow.com/tags/vhdl/info">VHDL</a>, both of which may look like C or Pascal (respectively) but require a different mindset to code well. On top of that, you need to learn to use the FPGA development tools.</p>

<p>Furthermore, neural network architecture can have a significant effect on overall performance, and that can matter a lot for energy-conscious hardware projects. While inference benchmarks for simple classification networks are available for virtually any inference hardware platform, benchmarks for complex networks that solve real-world problems may not be readily available, so you may go into a hardware project not knowing how well your chosen network will perform. In my experience, many companies find midway through their project that their chosen platform fails to meet performance requirements, with the result that they have to go back to the drawing board. Marketing feature creep further exacerbates this problem. Is it any wonder that some 87% of AI <a href="https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production/">projects</a> never make it to production?</p>

<p>For example, consider that many vision applications use convolutional neural networks (CNNs). In classical convolution, every input channel has a mathematical impact on every output channel. If we have 100 input channels and 100 output channels, there are 100 x 100 = 10,000 virtual paths. In 2017, a team of researchers created MobileNet CNNs that were computationally efficient without sacrificing accuracy. Their novel technique used depthwise convolution to replace classical convolution. With depthwise convolution, each input channel impacts only one output channel, so we save a lot of computation.</p>
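<p>To make that savings concrete, here is a minimal sketch in plain Python that estimates multiply-accumulate (MAC) counts for a single layer. The 56 x 56 feature map and 3 x 3 kernel are assumptions chosen purely for illustration, and the depthwise figure includes the 1 x 1 pointwise convolution that MobileNet-style blocks use afterward to mix channels back together.</p>

<pre><code class="language-python"># Back-of-the-envelope MAC counts for one convolutional layer.
# Assumed sizes (56x56 feature map, 3x3 kernel) are for illustration only.

def standard_conv_macs(h, w, c_in, c_out, k=3):
    # Classical convolution: every input channel feeds every output channel.
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k=3):
    # Depthwise step: each input channel is filtered on its own.
    depthwise = h * w * c_in * k * k
    # Pointwise (1x1) step: mixes the channels back together.
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

h = w = 56
c_in = c_out = 100  # the 100-in, 100-out example from the text

std = standard_conv_macs(h, w, c_in, c_out)
dws = depthwise_separable_macs(h, w, c_in, c_out)
print(f"classical:           {std:,} MACs")
print(f"depthwise separable: {dws:,} MACs (about {std / dws:.0f}x fewer)")
</code></pre>

<p>With these assumed sizes, the depthwise separable layer needs roughly an eighth of the arithmetic of its classical counterpart, which is exactly the kind of saving the MobileNet authors were after.</p>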
<p>But no solution comes without tradeoffs. Computationally efficient networks are not necessarily hardware-friendly. As a result, GPUs and other inference accelerator architectures could not fully realize the theoretical performance gains of depthwise convolution.</p>

<p>The catch was that the device was still processing the same amount of data: there are fewer computations, but each one handles more data, so memory bandwidth becomes the system bottleneck. If not architected with depthwise convolution in mind, the neural network accelerator becomes memory-bound and achieves lower efficiency, as many elements of the accelerator array sit like dark servers in a data center, consuming power and space while performing no useful work. The result is that a chip designed before this technique was known won’t yield the expected performance gains.</p>
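<p>A rough way to see why is to look at data reuse. The sketch below, using the same illustrative channel and kernel sizes as before, estimates how many MACs each input activation participates in; when that number collapses, the accelerator has to fetch far more bytes per unit of arithmetic.</p>

<pre><code class="language-python"># Why depthwise layers tend to be memory-bound: per-activation data reuse
# drops much faster than the MAC count does. Illustrative numbers only.

K = 3          # kernel size
C_OUT = 100    # output channels, as in the running example

# MACs that a single input activation contributes to (ignoring image borders):
reuse_classical = C_OUT * K * K   # reused across every output channel
reuse_depthwise = K * K           # reused only within its own channel

print(f"classical conv:  each activation feeds {reuse_classical} MACs")
print(f"depthwise conv:  each activation feeds {reuse_depthwise} MACs")
print(f"reuse drops by {reuse_classical // reuse_depthwise}x, so bytes moved "
      "per MAC rise sharply and bandwidth, not compute, sets the ceiling")
</code></pre>

<p>An accelerator array sized for the classical case can keep its multipliers fed; run depthwise layers on it and much of the array waits on memory instead, which is the dark-servers behavior described above.</p>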
<p>But what if your chosen hardware platform leveraged FPGA-based architectures that could be reconfigured in the field to ensure optimal performance as new network architectures or techniques such as depthwise convolution become available? That’s the advantage of the Xilinx Adaptive SoC on the Kria KV26 SOM.</p>

<p>We’ve covered how the system works and why it’s adaptable. Still, if you’re just starting a vision AI hardware project, solving the multi-dimensional equation of how to get started may scare you away. This is where Kria SOMs and the KV260 Vision AI Starter Kit come in.</p>

<h2 id="h-a-simpler-way-to-get-started">A simpler way to get started</h2>

<p>At Xilinx, we realized that we could give engineers more bang for their buck if we focused on specific vision AI use cases and provided a production-ready platform that lets developers get to market quickly. From the hardware perspective, the solution is simple: provide the key components as a single-board solution, the system-on-module (SOM). SOMs incorporate the processor, memory, key peripherals, and more on a single board, ensuring that you have the basic elements needed to target vision AI.</p>

<figure class="wp-block-image"><img src="https://lh5.googleusercontent.com/hbihXQgQwfUh6j1-EIJZmk0sfj6RNPGRbWpk1uBzr9-CyQWRxCd337hxP-F0QCfscdU0kgwtOeELsIYjWdIUkoQFuC3Bbs6Kgf2SOWid4rstf493XH8qgyAThnN0_kTKAp57Cy__" alt=""/></figure>

<p>Think of the SOM like a video game console: a specialized piece of hardware designed for a very specific purpose. In the case of the Kria KV26, that purpose is vision AI. By tailoring the hardware to this specific application, we optimize its size and cost. Plus, it’s production-ready, available from Xilinx in both industrial and commercial grade offerings.</p>

<p>Now that you have hardware, there are a few details to take care of: configuring the Xilinx Adaptive SoC on the Kria SOM and compiling your AI model to run on the platform. We wanted developers to be able to take advantage of the power of Adaptive SoCs without having to develop the underlying vision pipeline and neural network accelerator. Still, for developers who do need something custom, Xilinx has partners that will design the right logic for the project, or you can leverage the Vitis tools to integrate your own customizations without becoming an FPGA expert.</p>

<p>Xilinx has historically offered PetaLinux, a Yocto-based flavor of Linux with no single distribution binary: users download the sources, then configure and build them before they can start developing applications. Hardcore embedded developers typically love this style; they know Linux and want full control of what’s in their kernel.</p>

<p>But for developers who want to get up and running immediately without building their entire OS, we are introducing <a href="https://ubuntu.com/download/xilinx">Ubuntu</a> for Kria SOMs, with Canonical publishing Ubuntu images optimized for Xilinx. Ubuntu is the most popular open-source distribution of Linux and is generally familiar to developers. You can download the Ubuntu binary image for Kria, boot, and start developing. With Ubuntu, you can choose packages that are familiar to you and install them on Kria. The option to leverage Ubuntu with Linux packages accelerates the development of vision AI platforms.</p>

<figure class="wp-block-image is-resized"><img src="https://lh4.googleusercontent.com/9r34SwhhopBqmYh-Sm9pGhKZwL9qobBzGkly9DweDeGKj9I09gDB9r3VR-miytKEP6ziiptdLcS827rFAdqIWkKXYSQ4rkcTvQYZleon5XOd_7SgChCz7JOmE4_0VIsorrk3pkrf" alt="The Ubuntu logo" width="270" height="122"/></figure>

<p>For the final production implementation, you can leverage BitBake recipes to compile a production Linux image that includes the libraries you need for deployment. Alternatively, you can obtain a production Ubuntu license from Canonical and retain the ability to dynamically install packages. Whether you are a hardcore Yocto developer or a pure software developer who loves Ubuntu, you can get started with Kria and take either path to production.</p>

<p>Along with making it easier to get started with embedded Linux, we wanted to save developers time when they don’t need to reinvent the wheel. For most vision AI use cases, you’re not solving unique problems, so we and our partners have created plug-and-play Accelerated Applications. Xilinx Accelerated Applications are open source and enable the most common AI deployment topologies (such as a multi-stream AI appliance or Smart Camera). In addition, domain-expert partners such as Aupera Technologies and Uncanny Vision have developed apps that you can use for a fee. All the Accelerated Applications are available via the Xilinx App Store, the industry’s first app store for vision AI.</p>
<figure class="wp-block-image"><img src="https://lh3.googleusercontent.com/FGMCM2nwHBmgUIICe4BcG8t2grhmDWKI-vt_UruhMd-e8gcP_WOIMQetgf3GzP0olHMX_M57DOxsU2q3pXuA7UAEtte_CNug5LvTDem97JNsSIhpcZmzJKim3xkDpZpK4WD2A4Kw" alt=""/></figure>

<p>Just like with our Linux kernel options, we give users four levels of customization when leveraging Xilinx Accelerated Applications:</p>

<ol><li>Design purely at the application software level. The app itself does most of the heavy lifting, processing video data and producing vision data.</li><li>Swap out the default AI model for one you have trained yourself, using Vitis AI (see the sketch after this list). You get more control over the model but still avoid FPGA design.</li><li>Change the FPGA design, but in a familiar software language like Python, C, C++, or OpenCL, using Xilinx’s Vitis tool. Vitis has optimized libraries such as xfOpenCV, one of the most popular and long-standing libraries for vision functions like color space conversion, rotation, and filtering.</li><li>Fully customize the FPGA using the Xilinx Vivado tool. This is not required, but anyone with that expertise can take full advantage of the Adaptive SoC’s flexibility.</li></ol>
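<p>To give a feel for option 2, here is a minimal sketch of running a compiled model through the VART Python API that Vitis AI provides. It assumes you already have a compiled <code>.xmodel</code> on the board (the file name below is a placeholder), it glosses over pre- and post-processing, and your model’s expected input type may differ; treat it as an outline rather than a drop-in implementation.</p>

<pre><code class="language-python"># Sketch: run one inference on the Kria SOM's DPU via the VART Python API.
# "my_model.xmodel" is a placeholder for your own compiled model.
import numpy as np
import vart
import xir

graph = xir.Graph.deserialize("my_model.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
# Pick the subgraph that the compiler mapped onto the DPU accelerator.
dpu_subgraphs = [s for s in subgraphs
                 if s.has_attr("device") and s.get_attr("device").upper() == "DPU"]
runner = vart.Runner.create_runner(dpu_subgraphs[0], "run")

in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# DPU models are usually INT8-quantized; adjust dtype and scaling for your model.
input_data = [np.zeros(tuple(in_tensor.dims), dtype=np.int8)]
output_data = [np.zeros(tuple(out_tensor.dims), dtype=np.int8)]

job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
print("output shape:", output_data[0].shape)
</code></pre>

<p>The notable part is what the sketch does not contain: no HDL, no bitstream work, just a compiled model handed to a runner from application code.</p>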
<p>With this modular and flexible approach, any software developer can start on vision AI hardware projects and get good results. We ran a test of the Uncanny Vision license plate app on the Kria SOM and on a commercial-grade competitor: our board delivered 1.5 times the performance while consuming 33% less power per stream.</p>

<p>Of all the advances in computer vision AI in the last few years, we think the most significant is the increased accessibility of vision AI hardware. Democratizing these solutions will make it easier for any developer to create something dazzling. Ready to start building vision AI applications with Kria? Here are a few links that will put you on the right path to your first development.</p>

<p>Next Steps:</p>

<p>Check out the <a href="https://www.xilinx.com/products/som/kria.html?source=xblog&medium=website&campaign=vision_som&content=blog">Xilinx Kria Product Page</a></p>

<p>Download the <a href="https://ubuntu.com/download/xilinx">Ubuntu image for Kria KV260 today</a>!</p>

<p>Get Started with the <a href="https://www.xilinx.com/products/som/kria/kv260-vision-starter-kit/kv260-getting-started/getting-started.html?source=xblog&medium=website&campaign=vision_som&content=blog">Kria KV260 Vision AI Starter Kit</a></p>