
Accelerating Mass Business AI Adoption: NeuReality Launches Developer Portal for NR1 Inference Platform, Expanding Affordable AI Access

NeuReality, an AI infrastructure technology company, today announced the release of a new software developer portal and demo for easy installation of its full software stack and APIs. This marks a significant milestone for NeuReality following the delivery and activation of its 7nm AI inference server-on-a-chip, the NR1 NAPU™, and the successful bring-up of its entire NR1 AI hardware and software system in the first quarter.

The NR1™ AI Inference Solution enables businesses and governments to run newly trained AI models and existing AI applications without over-investing millions in scarce GPUs. Regardless of AI accelerator performance, CPUs remain the primary performance bottleneck in AI inference, driving excessive power consumption and cost and making the most exciting AI innovations impossible for the majority of organizations to install and operate today.

The NR1 system was deemed customer-ready in Q1 2024 after the NAPU arrived from TSMC Taiwan in December, followed by a successful bring-up in just 90 days. “To activate a complex silicon-to-software AI system so quickly and smoothly within a small start-up with an even smaller technical team is simply remarkable,” said Ilan Avital, Chief R&D Officer at NeuReality.

The system successfully met 99 percent of all functionality requirements, covering server-on-a-chip (SoC), IP, and software aspects. This achievement marked its readiness for early customer pilots, particularly in the cloud service provider, financial services, and healthcare sectors, for computer vision, automatic speech recognition, and natural language processing – laying an affordable foundation for generative AI, multi-modality, and more advanced technologies to come. NeuReality attributed the swift bring-up of the NR1 system to rigorous emulation testing conducted before the 2023 tape-out in collaboration with Synopsys.

The accompanying Software Development Kit (SDK) is designed exclusively for high-volume, high-variety AI workloads in enterprise data centers. It contains hierarchical tools for all types of compute engines and XPUs, along with optimized partitioning – making it easy to install, manage, and scale while giving developers their time back from the traditional complexity of deploying AI Inference.

NeuReality’s solution delivers an unprecedented developer experience, with significant flexibility to deploy the most advanced and complex AI pipelines more easily, based on the specific needs of each project. It empowers developers with a toolchain for complete AI pipeline acceleration, orchestration, and provisioning, plus inference runtime APIs to streamline the AI deployment workflow. All of this and more is now documented in a new technical whitepaper intended to inspire innovators to focus urgently on end-to-end data center efficiency and resource optimization to enable affordable AI deployments.

Citing a 35% AI adoption rate globally and a rate below 25% in the U.S., NeuReality is focused on lowering market barriers for mainstream industries. “It’s simply out of reach for the majority of businesses,” added Avital. “We can start changing that now by reducing high power consumption at the source – and educating customers that the ideal AI inference servers require fundamentally different and more efficient server configurations than the big supercomputers and high-end GPUs used in AI training.”

For example, NeuReality’s NR1-S™ AI Inference Appliance matches the deep learning processing performance of the Nvidia DGX H100 system while delivering six times the data processing performance at half the price, one-third the energy consumption, and half the physical space – all without requiring a host CPU in the system, relying solely on the NR1 to host the AI accelerator. The NR1 engineering involved packing 6.5x more processing power onto the NR1 NAPU, equivalent to 830 CPU cores in a single 4U chassis, while still having enough power to host 10 Nvidia GPUs or any other AI accelerator.

“At Cleveland Avenue, we see the immense potential of disruptive AI technologies like NeuReality to revolutionize the retail landscape, particularly within our focus on investments in restaurants, foodtech, and beverage,” said Mingu Lee, Managing Partner, Technology Investments at Cleveland Avenue.

“With NeuReality’s breakthroughs in running large-scale AI models such as computer vision for retail analytics, conversational AI for in-store and online virtual assistants and personalized recommendations, and generative AI-powered drive-throughs, we’re not just investing in technology; we’re investing in democratizing AI for ‘have-not businesses,’ ensuring that even those with razor-thin profit margins can deliver exceptional customer experiences and efficient business processes with the help of AI,” added Lee.

To access the portal and learn more, please visit:

Photo: Neureality founders. From left to right – VP VLSI Yossi Kasus, CEO Moshe Tanach, VP Operations Tzvika Shmueli. Credit – Aviv Kurt.
