Natural Intelligence Systems Graduates from DARPA VIP Program


Since announcing the AI Next campaign in September 2018, DARPA, the Defense Advanced Research Projects Agency, has put a concentrated focus on driving innovation in next-generation artificial intelligence (AI). DARPA’s Artificial Intelligence Exploration (AIE) Opportunity initiative launched the Virtual Intelligence Processing (VIP) program in June 2019 to continue its investments in future AI research and technology.

In June 2019, the team at Natural Intelligence Systems (NIS) was accepted into the DARPA VIP program. The program’s objective was to explore radically new, brain-inspired computing methodologies for machine learning. A primary goal of the VIP program was to validate the potential for a 10X improvement in energy efficiency and data-rate handling capability on a Department of Defense (DoD)-relevant challenge problem.

What was our goal? – Develop AI 3.0

Natural Intelligence Systems is enabling a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. NIS imagines machines with human-like capabilities, enabling people to make faster, better-informed decisions. We are building a system that learns quickly, adapts to a changing world, and brings transparency and trust to decision making. We are doing this in a way that dramatically increases performance and reduces energy needs, so these AI systems can be used wherever the problems exist.

The neuromorphic machine learning (NML) system integrates brain-inspired computational theory, learning algorithms, and data structures with an innovative non-von Neumann device architecture. With this AI system, NIS is achieving 3rd wave properties that are unachievable in AI 2.0 systems.

In the VIP program, NIS significantly advanced the development of the NML system. The program included the study and documentation of the system’s computational and mathematical theory and algorithms, and the refinement and implementation of the NML cortical model. The model enabled research into the system’s 3rd wave properties, as well as the development of tools and methods to exploit these properties. It also supported benchmark testing of the NML system against a neural network (NN) model executed on a state-of-the-art (SOA) GP-GPU processor. The testing compared performance, energy efficiency, and multiple 3rd wave properties using a DoD-relevant dataset, with results that far exceeded the DARPA VIP program objectives.

How is it done today? – Second wave AI 

Today’s 2nd wave deep learning systems are built upon well-understood multi-layer deep neural network models and von Neumann processing architectures. These foundations have enabled the strong growth of the technology but are now the sources of the fundamental weaknesses of NNs. These weaknesses include the need for extremely large data sets, unexplainable or black-box results, and vulnerability to noise.

NNs use complex statistical methods to train the weights of large networks, which requires huge training data sets and computational overhead. In 2018, an AI research and development company performed an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time. Combined with the end of Moore’s law, these rapidly increasing computational cost and power trends are unsustainable.
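The scale implied by a 3.4-month doubling time is easy to check with quick arithmetic. The sketch below simply restates the figure quoted above; the six-year window is an illustrative choice.

```python
# Growth implied by the 3.4-month doubling time cited above.
DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Multiplicative increase in training compute after `months` months."""
    return 2 ** (months / DOUBLING_MONTHS)

# Over six years (72 months), compute grows by roughly a factor of 2.4 million,
# which is why the trend is unsustainable without architectural change.
factor = growth_factor(72)
```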

Continuous learning, while possible in theory, is not practical with these networks. When the environment changes and new information appears, the network must be retrained with a dataset that includes both the original and the new training information; this is required so that the network’s heuristics can adjust to maintain accurate predictions for all classes in the combined dataset. The cost of collecting and curating a new dataset, combined with the time and energy required to retrain and optimize the neural network as the environment changes, makes continuous learning very expensive or impossible in near real time.
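The interference behind this retraining requirement can be shown with a toy gradient-trained linear model (my own sketch, not the NML system or any specific NN): updating on new data alone degrades the old task, while retraining on the combined dataset recovers both.

```python
# Toy illustration (hypothetical): a linear model trained by SGD "forgets"
# an old task when updated on new data alone, so accurate predictions
# require retraining on old + new data combined.
def sgd_fit(w, data, lr=0.1, epochs=500):
    for _ in range(epochs):
        for x, y in data:
            err = y - (w[0] * x[0] + w[1] * x[1])
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
    return w

def predict(w, x):
    return w[0] * x[0] + w[1] * x[1]

old_data = [([1.0, 0.5], 2.0)]    # original environment
new_data = [([1.0, -0.5], 0.0)]   # environment changed

w = sgd_fit([0.0, 0.0], old_data)                      # learns the old task
w_drift = sgd_fit(w, new_data)                         # new data only: old task degrades
w_retrain = sgd_fit([0.0, 0.0], old_data + new_data)   # combined retraining: both tasks held
```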

Today there is no good way to explain the decisions of most 2nd wave models. DNNs are unintelligible to humans, so training and test results remain unexplainable. These DNNs are composed of many layers, make decisions based on heuristics, and do not preserve any of the input semantics in the calculated outcome. There is much research on this topic; efforts include using alternative models, such as decision trees (typically with lower accuracy), or hybrid combinations of models.

The relatively low number of feature dimensions used in neural networks makes these systems vulnerable to noise. Even a single change in an input bit can completely change the network’s behavior and cause false positives. This leaves neural networks sensitive to attack and to sensor degradation.
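A minimal sketch of this sensitivity (a toy linear classifier with invented weights, not any production model): when there are only a few feature dimensions, each one carries a large share of the decision, so one flipped bit can cross the decision boundary.

```python
# Hypothetical 4-feature linear classifier. With so few dimensions, a single
# flipped input bit can push the score across the threshold.
weights = [0.6, -0.4, 0.5, -1.2]

def classify(bits):
    score = sum(w * b for w, b in zip(weights, bits))
    return 1 if score > 0 else 0

clean = [1, 0, 1, 0]   # score = 0.6 + 0.5 = 1.1  -> class 1
noisy = [1, 0, 1, 1]   # one bit flipped: 1.1 - 1.2 = -0.1 -> class 0
```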

Finally, to handle complex problems, today’s AI 2.0 systems often require immobile, power-hungry, and expensive machines. In general, neural network solutions are opaque, narrow, brittle, and susceptible to changing environments and to noisy or incomplete data. Biological systems do not exhibit these limitations, which reinforces the need to pursue brain-inspired AI systems.

What have we developed? – A Neuromorphic Machine Learning system

NIS has developed a brain-inspired AI system with AI 3.0 properties. Developing this system required NIS to discard the accepted tenets of AI 2.0 machine learning techniques and computing architectures and to create a system built on more than a century of neuroscience research. The NIS Neuromorphic Machine Learning system incorporates computational theory, algorithms and learning models, and a novel hardware architecture that together demonstrate 3rd wave properties similar to those exhibited by intelligence in nature.

The NML system needs neither millions of human-curated examples to train nor the enormous energy and time costs of many cycles of backpropagation for model optimization. NML learning algorithms perform low-shot supervised and unsupervised continuous learning beginning with the first input vectors. The system is noise resilient, which reduces the risk of adversarial attacks; it handles missing or degraded sensor data, enables sensor fusion with a consistent sensor data representation, and identifies unknown or emerging classes during inference, with explainable predictions, leading to trusted autonomy.

The NML model utilizes principles learned from neuroscience. These include hyperdimensional sparse data structures and learning algorithms in which digital neurons perform Hebbian learning with homeostasis control. These principles create a pattern-based AI system that exhibits properties that do not exist in AI 2.0 systems.
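The robustness of hyperdimensional sparse codes can be sketched in a few lines. The dimension, sparsity, and noise level below are illustrative choices, not NIS parameters: random sparse patterns barely overlap by chance, so even heavy bit noise leaves a pattern far closer to its original than to any unrelated pattern.

```python
import random

random.seed(0)
D, ACTIVE = 10_000, 200          # illustrative: 10,000 dims, 2% active bits

def random_pattern():
    return set(random.sample(range(D), ACTIVE))

def flip_noise(pattern, n):
    """Toy noise model: drop n active bits and activate n random others."""
    dropped = set(random.sample(sorted(pattern), n))
    added = set(random.sample(range(D), n))
    return (pattern - dropped) | added

a, b = random_pattern(), random_pattern()
noisy_a = flip_noise(a, 50)       # a quarter of a's active bits disturbed

overlap_self = len(a & noisy_a)   # stays high (at least 150 of 200)
overlap_other = len(a & b)        # near chance (~200 * 200 / 10000 = 4)
```

Because recognition depends on overlap rather than exact match, many bits can be flipped before a noisy input is confused with a different pattern.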

The learning algorithms forgo complex mathematical and statistical computation in favor of simple conditional weighted-sum operations, implemented and computed in hardware by thousands of digital neurons in parallel. The hardware implementation of this architecture exploits the massive parallelism of memory technology and enables the digital neurons to process tera-synaptic operations per second at tens of femtojoules per operation. This is enabled by an elegant streaming architecture, in contrast to GP-GPUs, which employ complex multiply-accumulate operations in software and shuttle instructions and data back and forth, incurring enormous Size, Weight, and Power (SWaP) costs in energy, space, computation, and memory.
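As a rough sketch (hypothetical data layout, not the NIS hardware design), a conditional weighted sum only touches synapses on active input lines, so cost scales with input activity rather than layer width; the back-of-envelope power figure just restates the throughput and per-operation energy quoted above.

```python
# Hypothetical event-driven neuron: the weighted sum is conditional on input
# activity, so work scales with the number of active inputs, not layer width.
def neuron_fires(active_inputs, synapses, threshold):
    total = sum(synapses.get(i, 0) for i in active_inputs)  # active lines only
    return total >= threshold

synapses = {3: 2, 17: 1, 42: 3}            # sparse synapse weights (illustrative)
print(neuron_fires({3, 42}, synapses, 4))  # True: 2 + 3 >= 4

# Back-of-envelope power from the figures above:
ops_per_second = 1e12       # tera-synaptic operations per second
joules_per_op = 20e-15      # "tens of femtojoules" per operation
watts = ops_per_second * joules_per_op   # 0.02 W of synaptic compute
```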

NIS has developed an AWS Platform-as-a-Service (PaaS) product offering that is FPGA accelerated on the AWS EC2 F1 cloud platform. NIS is using this offering for application development work with customers on several high impact applications.

What difference does it make? – AI 3.0 Properties

During the DARPA VIP program, NIS performed benchmark testing against the NN model executed on state-of-the-art GP-GPU hardware. The results demonstrated the stark contrasts between AI 2.0 neural networks and the NML system in performance, energy efficiency, low-shot learning, explainable results, noise resilience, anomaly detection, and handling of missing and drifting data.

These 3rd wave capabilities will enable a new set of AI applications for the DoD where SWaP constraints are essential, such as in space and with warfighters; applications with limited data for creating models, or with significant risk of adversarial attack in a rapidly changing battlefield environment; applications where it is impossible to anticipate or simulate every possible condition or combination for training, and human-like learning is needed; and applications where trusted decisions are required before deadly effects are applied.

As we reflect on our team’s achievements during this challenging program, it is clear that the creativity and innovation of this team, combined with our experience, hard work, and dedication, enabled our success in exceeding the program’s goals.

These factors, combined with our core belief in Natural Intelligence’s vision that the next generation of AI systems will be based on a pattern-based AI model closely aligned with the biological model of human intelligence, will continue to drive not only our future success but the future of AI.

To learn more about the results of our DARPA VIP project, please Contact Us.

Acknowledgment of Support and Disclaimer: This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00111990071.
