As we recently discussed in our article Advancing the Era of Artificial Intelligence with Innovative Semiconductor Technology, the industry has reached a pivotal point in artificial intelligence (AI) innovation. Technological progress in in-memory computing and parallel processing is finally catching up with the bandwidth-intensive workloads driven by advanced deep learning and AI, ushering in the next era of AI innovation for devices across the ecosystem.
We previously explored the importance of in-memory computing and the need for high-bandwidth memory (HBM) interfaces and DRAM solutions for advanced data processing. The other important piece of the puzzle is a powerful processor—one that can provide parallel processing to handle advanced data analysis. Through enhanced speed and performance, next-generation processors can accelerate the future of deep learning, computer vision, natural language processing and other AI applications.
Not only does AI have the potential to transform myriad industries, from healthcare to manufacturing to retail and more, it also promises to enrich our experiences with the devices around us. In Advancing the Era of Artificial Intelligence with Innovative Semiconductor Technology, we explained how server technology has fallen behind the massive influx of data we are experiencing, and how in-memory technology helps solve the bottleneck by increasing data indexing and transaction speeds.
Similarly, the potential transformation for personal mobile devices is nearly limitless. Deep learning and AI promise an exciting range of new applications, from intelligent personal assistants, to smart speakers, to language translation and AI photo filters. Looking beyond personal mobile devices, there is also a huge opportunity to leverage AI when it comes to the Internet of Things (IoT).
Consumers have embraced personal voice assistants over the past several years – including Amazon Alexa, Apple Siri, Google Assistant, Microsoft Cortana and others – and the industry is now starting to apply personal assistant technology to the connected home, delivering orders and requests to home appliances ranging from lights and stereo systems to TVs and refrigerators. As our homes become more connected – with processing power and connectivity integrated into the things around us – we are not only generating more data than ever before, but also opening up greater possibilities for harnessing that data for customized intelligence.
The need for advanced memory in bandwidth-intensive AI applications is well established, and Samsung is leading the way in providing the industry with embedded Universal Flash Storage (eUFS) and Low Power DDR4X (LPDDR4X). Combined with advanced sensing capabilities that identify and process user patterns, these innovative memory technologies help deliver even more customized personal assistance to mobile users.
Leading the global mobile DRAM market, Samsung’s LPDDR4X enables the next generation of ultra-slim mobile devices. With the industry’s highest speeds, LPDDR4X can handle the intense requirements of AI and deep learning, supporting faster multitasking, higher capacities and lower power consumption to deliver the best user experiences. Its slim form factor also extends that performance and speed to IoT devices, processing data across the connected home and mobile devices and opening the door to smarter possibilities across the ecosystem.
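For a rough sense of what “the industry’s highest speeds” means in practice, consider the publicly documented LPDDR4X peak per-pin data rate of 4,266 Mb/s together with the 64-bit bus typical of a flagship mobile package (neither figure is quoted in this article, so treat this as an illustrative back-of-the-envelope calculation):

$$
4266\ \tfrac{\text{Mb}}{\text{s}}\ \text{per pin} \times 64\ \text{pins} \div 8\ \tfrac{\text{bits}}{\text{byte}} \approx 34{,}128\ \text{MB/s} \approx 34.1\ \text{GB/s}
$$

Bandwidth on that order is what keeps the memory interface from becoming the bottleneck when on-device AI workloads stream image or sensor data.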
Samsung’s high-speed eUFS advances the AI industry by providing the industry’s highest-density embedded storage solution – up to 512GB – based on 64-layer V-NAND flash. With ultra-fast speeds, eUFS can deliver the groundbreaking performance needed to search through multiple images simultaneously for AI photo filtering, store 4K and 8K multimedia content, and power a range of augmented reality (AR) and virtual reality (VR) devices. The promise of an IoT-connected home is closer to reality with eUFS collecting, storing and processing data across automotive and mobile solutions, as well as multi-lens devices like drones and action cameras.
Both LPDDR4X and eUFS can store and process collected data for faster, more secure services when paired with Samsung Exynos processors, enabling the next era of on-device AI and security solutions. By leveraging advanced sensing technology and deep learning to understand user patterns, the personalized assistants of tomorrow will be even more customized, autonomous and sophisticated.
As more of our personal devices become connected and we continue to generate more data, AI and deep learning can process and analyze that data to extract valuable insights that improve our lives. However, significant technical challenges must be overcome to create an efficient deep learning environment across a variety of AI experiences in a mobile setting.
Enter Samsung Exynos: a processor designed specifically for deep learning, making it possible to implement an AI environment anywhere, anytime. Samsung realized years ago that to efficiently meet the intensive demands of deep learning, the industry needed technology that embeds AI models directly in the mobile device, delivering lower latency, better power efficiency and stronger security than running those models in the cloud.
For on-device AI, the Exynos 9 Series 9820 processor features a neural processing unit (NPU) that delivers deep learning processing capabilities alongside premium features, including a 4th-generation custom CPU and a faster multi-gigabit LTE modem. By handling massive amounts of data on-device at low power, the NPU significantly reduces the time required for deep learning tasks. With sophisticated image processing technologies and strong security, the Exynos 9820’s AI capabilities enable a mobile device to execute advanced applications, such as accurately identifying objects in photos for efficient search and organization, or scanning a user’s face in 3D with depth-sensing technology for hybrid face detection.
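To make the on-device pattern concrete, here is a minimal sketch of how an application might run a neural network locally rather than in the cloud. It uses TensorFlow Lite with the Android NNAPI delegate, which lets the operating system route supported operations to an available accelerator such as an NPU; the model file name, tensor shapes and class count are hypothetical, and the article does not specify which software stack Samsung’s NPU uses, so treat this purely as an illustration of on-device inference, not as the Exynos-specific API.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File

// Hypothetical model: a float 224x224 RGB image classifier with 1000 classes,
// exported to the TensorFlow Lite format. The file name is illustrative only.
private const val MODEL_PATH = "image_classifier.tflite"

fun main() {
    // The NNAPI delegate asks Android's Neural Networks API (Android 8.1+)
    // to schedule supported operations on an available accelerator; on
    // devices with a suitable vendor driver this can include an NPU.
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(File(MODEL_PATH), options)

    try {
        // Dummy input: one 224x224 RGB image as normalized floats.
        val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
        // Output buffer: one score per class.
        val output = Array(1) { FloatArray(1000) }

        // Inference runs entirely on the device; no image data leaves it,
        // which is the latency, power and privacy point made above.
        interpreter.run(input, output)

        val scores = output[0]
        val best = scores.indices.maxByOrNull { scores[it] } ?: -1
        println("Top class index: $best (score ${scores.getOrNull(best)})")
    } finally {
        // Release native resources held by the interpreter and delegate.
        interpreter.close()
        nnApiDelegate.close()
    }
}
```

In a production app the same pattern applies, with the model loaded from the APK’s assets and the input filled from the camera or photo library; the delegate simply falls back to the CPU when no compatible accelerator is available.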
Samsung Semiconductor is accelerating the digital transformation to AI by upgrading image recognition, language processing and data analysis across device types. With solutions like LPDDR4X, eUFS and Exynos processors, Samsung is heralding a more efficient deep learning environment for AI experiences across mobile and IoT. As the industry continues to come together and develop technology that can handle the influx of complex, data-intensive processes that AI promises, we can unlock the potential to bring greater innovation to AI and continue to transform the way we work, live and play.