\u003C/figure>\n\u003C!-- /wp:image -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Sometimes I wonder if this is real or if I’m dreaming. In the future, \u003Ca href=\"https://youtu.be/az9nFrnCK60\">this\u003C/a> will be the version of the moon landing kids remember. Is it more accurate than the original? What’s the best way to capture a moment so future viewers will have the same experience as those who recorded the images at the time? \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>So, how can you play around with this kind of system? Today, Intel is releasing a Jupyter notebook that we built. It allows anyone to use this code to experiment with image upscaling. Below, we’ll walk you through some of the basics of how it works and how you can experiment with it. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:heading -->\n\u003Ch2 id=\"h-a-brief-history-of-imagined-image-enhancement\">A brief history of imagined image enhancement\u003C/h2>\n\u003C!-- /wp:heading -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>In \u003Cem>Blade Runner\u003C/em>, there’s a scene where Rick Deckard is trying to get information from video footage. He freezes a frame and calls out “Enhance” and a quadrant. The image zooms in on a section, then suddenly comes into sharp focus. 
At the time, every engineer in the audience said, “No, you can’t do that.”\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:embed {\"url\":\"https://www.youtube.com/watch?v=hHwjceFcF2Q\\u0026ab_channel=PieroMancinelli\",\"type\":\"video\",\"providerNameSlug\":\"youtube\",\"responsive\":true,\"className\":\"wp-embed-aspect-4-3 wp-has-aspect-ratio\"} -->\n\u003Cfigure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio\">\u003Cdiv class=\"wp-block-embed__wrapper\">\nhttps://www.youtube.com/watch?v=hHwjceFcF2Q&ab_channel=PieroMancinelli\n\u003C/div>\u003C/figure>\n\u003C!-- /wp:embed -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>We’ve always had the fantasy of creating information from a vacuum. You have an image; you want to make it bigger and enhance the resolution. That's a CSI dream. But in a lower resolution image, that pixel information doesn't exist. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The best tool we had at the time was \u003Ca href=\"https://helpx.adobe.com/photoshop-elements/using/sharpening.html\">sharpening\u003C/a>. You could put the image in Photoshop or another image editing tool and sharpen it. This enhances the definition of the edges in an image based on averages of the colors around them. But more often than not, it looks worse. Calling out “Enhance” and pulling a license plate was always a joke. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>But with this technology today, we are not pulling from a vacuum. We can use machine learning to find similar images in the world and reconstruct the image from there. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>This technology promises better resolution video from grainy video, advances in computer vision, and yes, the ability to call out “Enhance!” and find out whose face is reflected in the mirror. 
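\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The kind of sharpening described above can be sketched in a few lines of NumPy. This is a minimal illustration of unsharp masking, not the exact filter Photoshop uses: each pixel is pushed away from the average of its 3x3 neighborhood, which exaggerates edges but invents no new information.\u003C/p>\n\u003C!-- /wp:paragraph -->

```python
import numpy as np

def sharpen(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Unsharp masking: push each pixel away from its local 3x3 average."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    # 3x3 box blur built by summing the nine shifted copies of the image.
    blurred = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return img + amount * (img - blurred)

# A flat region comes back unchanged; an edge gets exaggerated.
flat = np.full((4, 4), 5.0)
edge = np.zeros((3, 3))
edge[:, 2] = 9.0
```

\u003C!-- wp:paragraph -->\n\u003Cp>Pixels at an edge overshoot their original values while flat regions stay put; that overshoot is why over-sharpened images often look worse.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>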
\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:heading -->\n\u003Ch2 id=\"h-how-machine-learning-can-upscale-video\">How machine learning can upscale video\u003C/h2>\n\u003C!-- /wp:heading -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>To increase the density of pixels in an image or video, our software needs to understand the objects it contains. Most image recognition algorithms use a \u003Ca href=\"https://en.wikipedia.org/wiki/Convolutional_neural_network\">convolutional neural network\u003C/a> (CNN) to determine features important to the objects in an image so that they can be matched to known categories. For this process, we only need the features. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>As we saw with the sharpened image, the difficulty in upscaling a low resolution image or video comes from the places where the color changes quickly, known as \u003Ca href=\"https://dsp.stackexchange.com/questions/6452/how-to-extract-high-frequency-and-low-frequency-component-using-bilateral-filter#:~:text=Similar%20to%20one%20dimensional%20signals,are%20rapidly%20changing%20in%20space.&text=You%20can%20see%20how%20you,frequency%20detail%20across%20the%20image.\">high frequency details\u003C/a>. Think of the branches on a tree or the letters in words.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:image -->\n\u003Cfigure class=\"wp-block-image\">\u003Cimg src=\"https://lh5.googleusercontent.com/q7U19U5-t0_vzW-7hKmD1rTnVEEu5TjZVXQmAcZdwU83bXijJrqoUVWwo6irm2lN3bEwyTj1UW2mQEz7JN4K_UythMK9TDVii5f3fGZODP0RrnixdpF6Sqzibg5AbxSruVN4aAU\" alt=\"\"/>\u003C/figure>\n\u003C!-- /wp:image -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Our demo uses a modified version of \u003Ca href=\"https://arxiv.org/abs/1807.06779\">this published algorithm\u003C/a>. Using dense blocks, that is, groups of layers whose outputs feed directly into every later layer in the block rather than only the next one, the CNN can focus its attention on those specific details. 
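\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>To see why high-frequency details are the hard part, consider what downsampling does to them. The sketch below is a simplification (real cameras and codecs do something more elaborate than block averaging), but it shows that a smooth gradient survives a 2x downsample nearly intact, while a checkerboard, the extreme high-frequency pattern, collapses to a flat patch that no upscaler could recover from the low-resolution pixels alone.\u003C/p>\n\u003C!-- /wp:paragraph -->

```python
import numpy as np

def downsample2(img: np.ndarray) -> np.ndarray:
    """Halve the resolution by averaging each 2x2 block of pixels."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A smooth ramp (low frequency) keeps its shape after downsampling...
ramp = np.tile(np.arange(4, dtype=float), (4, 1))
# ...but a checkerboard (high frequency) averages out to a uniform patch,
# so that detail simply no longer exists in the smaller image.
checker = (np.indices((4, 4)).sum(axis=0) % 2) * 10.0
```

\u003C!-- wp:paragraph -->\n\u003Cp>This lost detail is exactly what a super resolution model tries to reconstruct from patterns it has learned elsewhere.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>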
High-frequency details can get lost in the lower levels of normal CNNs; in the denser networks, these tiny textures are accentuated. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:image -->\n\u003Cfigure class=\"wp-block-image\">\u003Cimg src=\"https://lh4.googleusercontent.com/MjaHkzOuMREb-qvnOEPrKfH6YurGn4MdW3DTTJXxgHv_YW6SAOKTvT5kx8wjTZmHXue3jS4CpRbhpbIv2HdHz32IR57YlzQqDUjf52h5Sfxpiq-Ss_8TVX1N2bWNjz2HKI3G9yI\" alt=\"\"/>\u003C/figure>\n\u003C!-- /wp:image -->\n\n\u003C!-- wp:image -->\n\u003Cfigure class=\"wp-block-image\">\u003Cimg src=\"https://lh5.googleusercontent.com/7iuNFSJCk_-mjO1FtCSIWYi5RSDETYewBE-EB7IFCS4kl6XxBWBobfnuKj4O6jkaWxLTh0ZPlEVeD2erLYNVx6ey0F3FIwjvc3YRBChd3sfmNm73AkVPqqY4uQNHur2yqH5VHLk\" alt=\"\"/>\u003C/figure>\n\u003C!-- /wp:image -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>In the demo below, we’ll import a super resolution model, upload a low resolution image, run the model on the image to upscale it, and view several different outputs comparing the new image to the original. It’s all pre-built and ready for you to run!\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:heading -->\n\u003Ch2 id=\"h-get-started-with-our-upscaling-demo\">Get started with our upscaling demo\u003C/h2>\n\u003C!-- /wp:heading -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The demo consists of two \u003Ca href=\"https://jupyter.org/\">Jupyter notebooks\u003C/a> that contain all the Python code needed to upscale an image or video. You can run them locally in Jupyter, managing the requirements yourself, or you can use Intel® DevCloud for the Edge and skip directly to using the demo remotely without installing any additional software. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>To get started locally, follow \u003Ca href=\"https://github.com/openvinotoolkit/openvino_notebooks#-getting-started\">these installation instructions\u003C/a>. Open \u003Ccode>202-vision-superresolution-image.ipynb\u003C/code> in Jupyter. 
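\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Before running the notebook, it helps to know what the model is being compared against: plain interpolation. Here is a minimal hand-rolled upscaler using bilinear interpolation (the notebook’s baseline is bicubic, a close cousin that fits cubic rather than linear curves between pixels):\u003C/p>\n\u003C!-- /wp:paragraph -->

```python
import numpy as np

def upscale_bilinear(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2-D grayscale image by `factor` with bilinear interpolation."""
    h, w = img.shape
    # Positions in the source image for every pixel of the larger target image.
    rows = np.linspace(0, h - 1, h * factor)
    cols = np.linspace(0, w - 1, w * factor)
    # Interpolate along each row first, then along each column.
    tmp = np.array([np.interp(cols, np.arange(w), r) for r in img])
    return np.array([np.interp(rows, np.arange(h), c) for c in tmp.T]).T
```

\u003C!-- wp:paragraph -->\n\u003Cp>Interpolation like this only blends pixels that already exist, which is why the bicubic side of the notebook’s comparison looks soft: it spreads low-frequency content over more pixels but cannot restore high-frequency detail.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>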
\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>To get started with DevCloud, \u003Ca href=\"https://www.intel.com/content/www/us/en/forms/idz/devcloud-enrollment/edge-request.html\">sign up for free\u003C/a>. Once you have created your account, sign in and go to the \u003Ca href=\"https://software.intel.com/content/www/us/en/develop/tools/devcloud/edge/build\">Build\u003C/a> page:\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:list {\"ordered\":true} -->\n\u003Col>\u003Cli>Select \u003Cstrong>Open Jupyter* Notebooks\u003C/strong>\u003Cbr>\u003Cimg src=\"https://lh5.googleusercontent.com/zY7dyLlx_-tG6UtfTXF8jJHXd7i7gtDytxpAEoBYxmumEYR2Q60f2NIsvy0FkeuJQHJOJ2DcjsHGbVWrobvaKWA1GDArTlPsGg89_Yp4qmC_rqhIkge6jH14ayNs2ZPWUdT4bXA\" width=\"624\" height=\"120\">\u003Cbr>\u003Cstrong>Note:\u003C/strong> You may get an HTTP 400 error here. If so, you will need to clear your browser cache. \u003C/li>\u003Cli>Launch a new server. \u003C/li>\u003Cli>Click \u003Cstrong>New >> Terminal\u003C/strong>. \u003C/li>\u003C/ol>\n\u003C!-- /wp:list -->\n\n\u003C!-- wp:image -->\n\u003Cfigure class=\"wp-block-image\">\u003Cimg src=\"https://lh3.googleusercontent.com/LT6LcKNDtbUMPD6ZDZXSQuh50LhosuImQdhkVpO94tcNgXNxV1EE4sOaXWykPUYEISWi4WpC-WyLkRPTasv2ErZPSRtmOKwOWGvmWM9H7j0kluNDnC8iSlOajs-ixP8nFR8SRSo\" alt=\"\"/>\u003C/figure>\n\u003C!-- /wp:image -->\n\n\u003C!-- wp:list {\"ordered\":true,\"start\":4} -->\n\u003Col start=\"4\">\u003Cli>In the terminal, clone the OpenVINO demos with the command \u003Ccode>git clone \u003Ca href=\"https://github.com/openvinotoolkit/openvino_notebooks.git\">https://github.com/openvinotoolkit/openvino_notebooks.git\u003C/a>\u003C/code>\u003C/li>\u003Cli>Return to the Control Panel, then navigate to \u003Ccode>openvino_notebooks/notebooks/202-vision-superresolution/202-vision-superresolution-image.ipynb\u003C/code> and open the notebook. 
\u003C/li>\u003C/ol>\n\u003C!-- /wp:list -->\n\n\u003C!-- wp:heading {\"level\":3} -->\n\u003Ch3 id=\"h-using-the-notebook\">Using the notebook\u003C/h3>\n\u003C!-- /wp:heading -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>The notebook is already built and ready to run. Run the entire notebook (\u003Cimg src=\"https://lh5.googleusercontent.com/KNriwQrMAdN3SslW7tSlrTOOVas9GZv_weGhbyAkWmWsbRuy_LQuKcz8jNJOR1GRth6gSuUiKGdoDWNXVPu_9LQAh1dK2ct8KAxUAEUbqFQzFzykyzV_I5sT79eRvYfuSdMV4UM\" width=\"25\" height=\"16\">). You can scroll through the cells to see what each step in the model is doing. Scroll down and you’ll see a comparison between the bicubic sampling method and the super resolution method for a cropped version of the picture. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:image -->\n\u003Cfigure class=\"wp-block-image\">\u003Cimg src=\"https://lh4.googleusercontent.com/ung6zK4uKaqJtPJDoEOOm3kB9fJMTezqQ0MZcIvduSW7qY4-_aGbDvlc4JB_yjTeDg5nw0NA2xstxEUjx5p_QjSN1YY9QnRoSQndFk3q3ziSJ7uSKi8VfWH_1GZYaX-c7baNO_k\" alt=\"\"/>\u003C/figure>\n\u003C!-- /wp:image -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>To run the model on your own image, upload an image to the images folder and change the image path in cell 6. You can also change the crop location by changing the \u003Ccode>starty\u003C/code> and \u003Ccode>startx\u003C/code> variables in cell 7. \u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:heading -->\n\u003Ch2 id=\"h-conclusion\">Conclusion\u003C/h2>\n\u003C!-- /wp:heading -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Super resolution upscaling puts us one step closer to enhancing details in an image; as you can see in the picture of the tower and the cherry tree above, we were able to enhance the corner well enough to read the words on the flag. But this sort of computer vision application is only as good as the model and CNN behind it, and they can \u003Ca href=\"https://petapixel.com/2020/08/17/gigapixel-ai-accidentally-added-ryan-goslings-face-to-this-photo/\">sometimes do strange things\u003C/a>. 
What’s exciting about the moment we live in is that the tools needed to work on projects like this, and perhaps to contribute to the next breakthrough, aren’t just available to academics in laboratories or employees at major tech companies. Anyone can start to master the basics of machine learning from home, putting them on the path to great discoveries in the future.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>Intel has a suite of software tools to help you continue your AI journey, from conversion and optimization using OpenVINO™, to benchmarking and prototyping using Intel® DevCloud for the Edge, to packaged solutions ready for deployment from the Intel® Edge Software Hub. \u003Ca href=\"https://www.intel.com/content/www/us/en/artificial-intelligence/posts/few-steps-for-faster-inferencing.html\">Read this article\u003C/a> to learn more about how you can leverage these tools, or explore them directly yourself.\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>\u003Ca href=\"https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html\">Download the Intel® Distribution of OpenVINO™ Toolkit\u003C/a>\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>\u003Ca href=\"https://devcloud.intel.com/edge/\">Register for Intel® DevCloud for the Edge\u003C/a>\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>\u003Ca href=\"https://www.intel.com/content/www/us/en/edge-computing/edge-software-hub.html\">Explore the Intel® Edge Software Hub\u003C/a>\u003C/p>\n\u003C!-- /wp:paragraph -->\n\n\u003C!-- wp:separator -->\n\u003Chr class=\"wp-block-separator has-alpha-channel-opacity\"/>\n\u003C!-- /wp:separator -->\n\n\u003C!-- wp:paragraph -->\n\u003Cp>\u003Cem>The Stack Overflow blog is committed to publishing interesting articles by developers, for developers. 
From time to time that means working with companies that are also clients of Stack Overflow’s through our advertising, talent, or teams business. When we publish work from clients, we’ll identify it as Partner Content with tags and by including this disclaimer at the bottom.\u003C/em>\u003C/p>\n\u003C!-- /wp:paragraph -->