Kisaco Research

Revterra’s Kinetic Stabilizer is engineered to handle the massive, volatile power swings demanded by large-scale AI workloads. AI deployment is bottlenecked by infrastructure and requires a rapidly scalable, high-performance power-quality solution that can be deployed without fear of supply-chain disruption. Our battery-free technology provides a stable bridge between the grid and AI loads with a physically instantaneous, passive response that requires no power electronics. Unlike conventional solutions, the Kinetic Stabilizer offers unmatched cost-effectiveness on a per-kW basis and a functionally infinite cycle life, free from the constraints of chemical storage.
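As a rough illustration of the physics behind battery-free kinetic storage, the energy held in a spinning rotor is E = ½Iω². The sketch below uses hypothetical rotor numbers (not Revterra specifications) purely to show the scale involved:

```python
# Back-of-envelope flywheel energy storage: E = 1/2 * I * omega^2.
# The mass, radius, and speed below are illustrative only, not Revterra's specs.
import math

mass_kg = 1000.0   # rotor mass
radius_m = 0.5     # rotor radius (modeled as a solid cylinder)
rpm = 10000.0      # rotational speed

inertia = 0.5 * mass_kg * radius_m**2     # I = 1/2 * m * r^2 for a solid cylinder
omega = rpm * 2 * math.pi / 60            # convert rpm to rad/s
energy_j = 0.5 * inertia * omega**2       # stored kinetic energy in joules

print(f"{energy_j / 3.6e6:.1f} kWh stored")  # ~19 kWh for these toy numbers
```

Because energy scales with the square of rotational speed, doubling the speed quadruples the stored energy, which is why flywheel systems favor high-speed rotors.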

Author:

Ben Jawdat

Founder & CEO
Revterra

Ben Jawdat is the founding CEO of Revterra, where he is working to commercialize a kinetic stabilizer solution for power-quality challenges at AI data centers and other commercial and industrial sites. Prior to starting Revterra, he worked on the development of new superconducting materials at the University of Houston, where he received his PhD in physics, and completed postdoctoral studies at the Air Force Research Laboratory and Rice University.

Flexnode’s approach delivers a strategic advantage over traditional data center construction by transferring complexity off the job site and into a controlled manufacturing environment. Our modules are engineered for speed, scalability, and geographic flexibility, purpose-built to support the rapidly evolving demands of AI and high-performance compute workloads. By industrializing the data center, we ensure repeatable, high-quality outcomes that can be rapidly deployed with minimal on-site work.

Author:

Tony Hall

CTO
Flexnode

Tony Hall is the Chief Technology Officer, responsible for the detailed design, fabrication, and delivery of Flexnode’s products. Tony has 15 years of experience in the design and construction industry, ranging from design consulting to project planning and management. On the design side, Tony focused on electrical engineering in New York City; among a variety of project types, mission-critical and sophisticated standby power systems were a focus, including projects for Digital Realty, Google, zColo, and Telehouse. Since graduate school, Tony has broadened his expertise, managing large-scale electric utility projects for Exponent and, most recently, serving at Clark Construction Group as a Systems Executive in preconstruction, responsible for all mechanical, electrical, plumbing, low-voltage, and fire-protection (MEP/LV/FP) systems in project deliveries ranging from traditional design-bid-build to public-private partnerships. Tony brings a depth of understanding across engineering disciplines and delivery methods, and provides executive direction for Flexnode’s data centers.

Semiconductor development faces increasing complexity, faster timelines, and fierce competition, exposing the limitations of traditional EDA tools. In response, AI agents powered by LLMs and advanced algorithms are emerging as next-generation solutions. This session explores how these agents surpass conventional automation by independently managing tasks such as hardware modeling, constraint solving, debugging, testbench creation, and design optimization. We'll cover real-world use cases showing how AI agents deliver improved productivity, design quality, and time-to-market, including their ability to autonomously detect bugs and optimize RTL designs.
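The simulate-inspect-patch cycle such agents automate can be sketched in a few lines. Everything below is a toy stand-in: `run_simulation` and `propose_fix` are hypothetical placeholders for a real simulator call and an LLM-driven repair step, not ChipAgents' actual interfaces:

```python
# Toy sketch of an agentic RTL debug loop: simulate, inspect failures,
# patch, repeat. The simulator and the "agent" are stubs for illustration.

def run_simulation(rtl: str, test: str) -> list[str]:
    """Stub simulator: flags a known overflow pattern in the RTL."""
    return ["overflow at bit 7"] if "wire [6:0]" in rtl else []

def propose_fix(rtl: str, failure: str) -> str:
    """Stub 'agent': widens the bus in response to the overflow report."""
    return rtl.replace("wire [6:0]", "wire [7:0]") if "overflow" in failure else rtl

def agent_loop(rtl: str, test: str, max_iters: int = 3) -> str:
    """Iterate simulate -> inspect -> patch until clean or budget exhausted."""
    for _ in range(max_iters):
        failures = run_simulation(rtl, test)
        if not failures:
            return rtl
        rtl = propose_fix(rtl, failures[0])
    return rtl

fixed = agent_loop("wire [6:0] sum;", "add_overflow_test")
print(fixed)  # wire [7:0] sum;
```

A production system replaces the stubs with a real simulator invocation and an LLM call, but the control flow, a bounded iterate-until-clean loop, is the same.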

Author:

Mehir Arora

Founding Engineer
ChipAgents

Mehir Arora is a founding engineer at ChipAgents, a company at the forefront of integrating agentic AI into Electronic Design Automation (EDA) workflows. A graduate of UC Santa Barbara, Mehir has contributed to advancing the state of the art in AI methodologies, including a paper presented at ICML 2024. At ChipAgents, he focuses on developing agentic AI tools that enhance chip design and verification processes, aiming to significantly improve efficiency and productivity in semiconductor engineering.

Arm Neoverse is designed to meet the evolving demands of AI infrastructure, offering high compute density, exceptional energy efficiency, and a strong total cost of ownership (TCO). As host processors, Neoverse-based CPUs integrate seamlessly with GPUs and AI accelerators to enable flexible, power-efficient, high-performance deployments across heterogeneous AI platforms capable of managing the complexity and coordination required by agentic AI systems.

In this session, we’ll demo an agentic AI application running on an AI server powered by Arm Neoverse as the host node. The application coordinates multiple agents to accelerate decision-making and streamline workload execution. We’ll also highlight the advantages of running agentic AI on heterogeneous infrastructure, explain why Arm CPUs are ideal as host processors, and demonstrate how Arm provides a scalable, efficient foundation for real-world enterprise and cloud environments.
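The host-coordinator pattern described above can be sketched as follows. The agents and their tasks are illustrative stand-ins, not the demo application itself; the point is that orchestration logic like this runs on the host CPU while heavy inference is offloaded to accelerators:

```python
# Hedged sketch of host-side agent coordination. The agent functions are
# toy stand-ins; in a real deployment each would invoke an LLM or a model
# on an attached GPU/accelerator, while this orchestration logic runs on
# the host CPU (e.g., an Arm Neoverse-based processor).
from concurrent.futures import ThreadPoolExecutor

def retrieval_agent(query: str) -> str:
    """Stand-in for an agent that fetches relevant context."""
    return f"docs for '{query}'"

def summarizer_agent(context: str) -> str:
    """Stand-in for an agent that condenses retrieved context."""
    return f"summary of {context}"

def coordinate(query: str) -> str:
    """Host-side coordinator: sequences agents and merges their results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        docs = pool.submit(retrieval_agent, query).result()
        return pool.submit(summarizer_agent, docs).result()

print(coordinate("power usage"))  # summary of docs for 'power usage'
```

The coordinator itself does little compute; its job is scheduling, data movement, and result aggregation, which is exactly the host-processor role the session argues Arm CPUs fill efficiently.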

Author:

Na Li

Principal Solution Architect
Arm

Na Li is Principal AI Solution Architect for the Infrastructure Line of Business (LOB) at Arm. She is responsible for creating AI solutions that showcase the value of Arm-based platforms, and has around 10 years of experience developing AI applications across various industries. Originally trained as a computational neuroscientist, she received her PhD from the University of Texas at Austin.

AI inference costs are high and workloads are growing, especially when low latency is required. We demonstrate NorthPole's energy efficiency and high throughput for low-latency edge and datacenter inference tasks.

Author:

John Arthur

Principal Research Scientist
IBM

John Arthur is a principal research scientist and hardware manager in the brain-inspired computing group at IBM Research - Almaden. He has been building efficient, high-performance brain-inspired neural network chips and systems for the last 25 years, including Neurogrid at Stanford and both TrueNorth and NorthPole at IBM. John holds a PhD in bioengineering from the University of Pennsylvania and a BS in electrical engineering from Arizona State University.

Author:

Manuel Botija

VP, Product Management
Axelera

Manuel Botija is an engineer with degrees from Telecom Paris and Universidad Politécnica de Madrid. Over the past 17 years, he has led product innovation in semiconductor startups across Silicon Valley and Europe. Before joining Axelera, Manuel served as Head of Product at GrAI Matter Labs, which was acquired by Snap Inc.

Outdated x86 CPU/NIC architectures bottleneck AI systems, limiting Generative AI's true potential. NeuReality's groundbreaking NR1® chip combines two entirely new categories, AI-CPU and AI-NIC, in a single chip, fundamentally redefining AI data center inference. It removes these bottlenecks, boosting Generative AI token output by up to 6.5x at the same cost and power as x86 CPU systems, making AI widely affordable and accessible for businesses and governments. It works in harmony with any AI accelerator or GPU, maximizing GPU utilization, performance, and system energy efficiency. The NR1® Inference Appliance, with built-in software, an intuitive SDK, and APIs, comes preloaded with out-of-the-box LLMs such as Llama 3, Mistral, DeepSeek, Granite, and Qwen for rapid, seamless deployment with significantly reduced complexity, cost, and power consumption at scale.

Author:

Moshe Tanach

Co-Founder & CEO
NeuReality

Moshe Tanach is Founder and CEO at NeuReality.

Before founding NeuReality, he served as Director of Engineering at Marvell and Intel, leading complex wireless and networking products to mass production.

He also served as Vice President of R&D at DesignArt-Networks (later acquired by Qualcomm), developing 4G base station products.

He holds a Bachelor of Science in Electrical Engineering (BSEE), cum laude, from the Technion, Israel.
