Cerebras Systems was founded in 2016 “to solve problems others are afraid to tackle.” The company is backed by premier venture capitalists and technologists, and raised roughly $25 million in December 2016 from Benchmark and others, according to industry sources. Cerebras was founded by the team behind SeaMicro, a developer of high-density, low-power, single-box cluster computers, which was sold to AMD for $355 million in 2012. Rumors suggest that Cerebras may be developing deep learning processors.
Andrew Feldman, Founder and CEO (previously Corporate VP and GM at AMD via the acquisition of SeaMicro, where he served as founder and CEO; the SeaMicro team became the Data Center Server Solutions business unit inside AMD)
Gary Lauterbach, CTO (previously Corporate VP & DCSS CTO at AMD and CTO and Co-founder of SeaMicro)
Bill Lynch, Ph.D., VP, Engineering (previously VP, Engineering at Huawei)
Jean-Philippe Fricker, Founder and Chief System Architect (previously Senior Hardware Architect for DSSD Hardware at EMC, an AMD Fellow, and Consultant, System Architect at Pluribus Networks and SeaMicro)
Sean Lie, Founder and Chief Hardware Architect (previously Chief Hardware Architect at SeaMicro and Chief Architect, DCSS at AMD)
DeepScale was founded in Sept. 2015 to develop perception systems for autonomous vehicles. The founders are Deep Learning experts from UC Berkeley with strong academic and industry track records. In October 2016, the company raised $500K in angel funding and in March 2017, DeepScale closed $3 million in seed funding from Greylock Partners, Bessemer Venture Partners, and Autotech Ventures. DeepScale says it has already attracted the interest of key players in the automotive industry.
In collaboration with researchers at UC Berkeley, DeepScale has released several open source projects:
SqueezeNet is a deep neural network (DNN) model designed to be as small as possible while preserving reasonable accuracy on a computer vision dataset. While SqueezeNet is designed for full-image classification, SqueezeDet performs the task of object localization and detection. As of December 2016, SqueezeDet is simultaneously the fastest, smallest, and most accurate model on the KITTI object detection benchmark. The BeaverDam tool is a web interface for labeling data.
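SqueezeNet’s size savings come from replacing standard 3x3 convolution layers with “fire modules”: a 1x1 “squeeze” layer that shrinks the channel count, followed by parallel 1x1 and 3x3 “expand” layers. A small sketch of the parameter arithmetic (illustrative channel counts, not a specific layer from the published model) makes the savings concrete:

```python
def conv_params(c_in, c_out, k):
    """Weights in a k x k convolution layer (biases ignored)."""
    return c_in * c_out * k * k

def fire_params(c_in, squeeze, e1x1, e3x3):
    """Weights in a SqueezeNet fire module: a 1x1 'squeeze' layer
    followed by parallel 1x1 and 3x3 'expand' layers."""
    return (conv_params(c_in, squeeze, 1)      # squeeze 1x1
            + conv_params(squeeze, e1x1, 1)    # expand 1x1
            + conv_params(squeeze, e3x3, 3))   # expand 3x3

# A standard 3x3 layer mapping 128 -> 256 channels:
standard = conv_params(128, 256, 3)                      # 294,912 weights
# A fire module with the same 256 output channels (e1x1 + e3x3):
fire = fire_params(128, squeeze=32, e1x1=128, e3x3=128)  # 45,056 weights
print(standard, fire, round(standard / fire, 1))         # ~6.5x fewer weights
```

Because both expand layers see only the 32 squeezed channels rather than all 128 inputs, the expensive 3x3 filters operate on a much thinner tensor, which is where most of the reduction comes from.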
On a single GPU or a single-socket CPU system, DNNs can take weeks or months to train on publicly available datasets. The FireCaffe training system scales DNN training over a cluster of servers, enabling faster time-to-solution and allowing DNNs to be trained on larger volumes of data in a fixed amount of time. Using FireCaffe, DeepScale accelerated the training of the GoogLeNet model from 3 weeks to 10 hours.
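The general approach behind this kind of scaling is synchronous data-parallel SGD: each server computes gradients on its own shard of the data, the gradients are averaged across the cluster (an all-reduce over the network), and every replica applies the identical update. A minimal single-process NumPy sketch of that pattern, with the cluster communication reduced to an in-memory average (illustrative names and a toy least-squares model, not FireCaffe’s actual implementation):

```python
import numpy as np

def data_parallel_step(w, shards, grad_fn, lr=0.1):
    """One synchronous data-parallel SGD step: each 'worker' computes a
    gradient on its own data shard, the gradients are averaged (standing
    in for the all-reduce a cluster performs over the network), and
    every replica applies the identical update."""
    grads = [grad_fn(w, x, y) for (x, y) in shards]  # one per worker
    avg = np.mean(grads, axis=0)                     # all-reduce
    return w - lr * avg

# Toy least-squares gradient for a linear model y ~ x @ w.
def grad_fn(w, x, y):
    return 2 * x.T @ (x @ w - y) / len(y)

rng = np.random.default_rng(0)
true_w = np.array([3.0, -2.0])
x = rng.normal(size=(64, 2))
y = x @ true_w
shards = [(x[i::4], y[i::4]) for i in range(4)]      # 4 equal "workers"

w = np.zeros(2)
for _ in range(200):
    w = data_parallel_step(w, shards, grad_fn)
print(np.round(w, 2))                                # converges toward true_w
```

Because the shards are equal-sized, the averaged gradient equals the full-batch gradient, so the parallel run converges to the same answer as a single worker; the speedup in a real cluster comes from each worker touching only a fraction of the data per step.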
Today, most Deep Learning models require high performance processors, such as NVIDIA GPUs. We believe DeepScale is developing Deep Learning technology that can run on low cost processors.
Forrest Iandola, Co-founder & CEO (previously a PhD student at Berkeley focused on deep learning for computer vision)
CORNAMI is a name change from SVIRAL, a company founded in 2011 to develop technology for highly efficient and accelerated multi-core programming. The company is headquartered in Silicon Valley, with offices in Sacramento and Boston, and has 20 employees.
CORNAMI is an AI high performance computing company that has developed multi-core technology that efficiently uses large numbers of smaller cores in a highly concurrent, parallel manner. CORNAMI’s technology enables highly efficient multi-core processing that dramatically changes the output-to-power performance at the petabyte data-set scale.
In September 2016, CORNAMI closed $3 million in Series B financing led by Impact Venture Capital. In addition, two technology entrepreneurs participated in the round and joined the company: Yatish Mishra as president and COO, and Denoid Tucker as VP of product and services. The company is currently raising Series C funding to complete production and finance go-to-market strategies.
Today, most applications typically use only a single core at any given time, leaving the unused cores idle. This is because the von Neumann programming model, which underlies modern computing, works well when dealing with a single core but fails when dealing with multiple cores. CORNAMI argues that it has solved the problem, delivering the ability to fully utilize the idle or dark processors that exist in conventional off-the-shelf processors and systems.
CORNAMI’s TruStream implements a highly efficient and extensible model of concurrent programming. By using a standardized runtime concurrency model called TruStream, heterogeneous multi-core processor resources are abstracted into a common homogeneous core pool. Programmers can easily implement concurrency through CORNAMI’s TruStream control structures embedded in standard languages. TruStream’s programming model and associated core fabric, TruFabric, improve the performance and latency of fine-grained workloads by dynamically and efficiently allocating processor resources to match changing real-time demands.
CORNAMI has developed a new parallel architecture with independent decision-making capabilities at each processing core, interspersed with high-speed memory, and all interconnected by a biologically inspired network to produce a scalable “sea of cores”. It is based on the TruStream Compute Fabric (TSCF), which is extensible across multiple chips, boards, and racks, with each core being independently programmable. This drives higher silicon utilization and programmability without the overhead of current industry approaches.
By using CORNAMI’s TruStream Programming Model (TSPM), multi-core processor resources are abstracted into a common homogeneous core pool. TruStream is implemented in both software and hardware and runs across the TSCF.
TruStream supports a direct concurrency abstraction, allowing applications to scale seamlessly across a fabric of cores ranging from a single CPU to WAN-linked datacenters. TruStream expresses concurrency simply and deterministically, with no locks, and performance does not fall off as more cores are utilized. Programmers can easily implement concurrency through the TruStream control structures embedded in higher-level standard languages.
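TruStream’s API is proprietary and not publicly documented, but the general idea it describes, lock-free deterministic concurrency via streams, can be sketched generically: a computation is expressed as values flowing through a pipeline of stages connected by FIFO queues, so stages can run on separate cores with no shared mutable state, and output order is fixed by the queue ordering rather than by scheduler timing. A minimal Python illustration of that pattern (all names are illustrative, not TruStream’s):

```python
import queue
import threading

def stage(fn, inq, outq):
    """Run one pipeline stage on its own thread: apply fn to each item
    arriving on the input stream, forward results downstream, and
    propagate the end-of-stream marker (None)."""
    def run():
        while True:
            item = inq.get()
            if item is None:        # end-of-stream marker
                outq.put(None)
                return
            outq.put(fn(item))
    t = threading.Thread(target=run)
    t.start()
    return t

# Build a two-stage stream: square, then add one.  Each stage could run
# on its own core; determinism comes from FIFO ordering, not locks.
q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [stage(lambda x: x * x, q0, q1),
           stage(lambda x: x + 1, q1, q2)]

for v in range(5):
    q0.put(v)
q0.put(None)

results = []
while (item := q2.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
print(results)   # [1, 2, 5, 10, 17]
```

The results are identical on every run regardless of how the two stage threads are scheduled, which is the deterministic, lock-free property the stream abstraction is meant to provide.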
TruStream runs on bare metal: no software stack, no virtual machine, no operating system, no context switching, no task dispatching, and no caching. The company provides an SDK supporting big data frameworks such as Apache Spark that allows applications to be programmed in higher-level languages, including C++, providing an easy-to-use migration path for existing code bases.
TruStream is implemented in software and runs on single or networked heterogeneous multi-core CPUs and operating systems (x86, ARM, Android, Linux, and Mac OS). Whether purely in software or with hardware augmentation, TruStream increases performance, reduces power consumption and latency, and introduces additional forms of application concurrency previously unavailable.
CORNAMI’s technology can also take a topology with interconnections and actions and efficiently accelerate it using the TruStream Compute Fabric (TSCF) on its ultra-high core density, ultra-high memory bandwidth data center chip (DCIC). CORNAMI’s chip architecture is programmable and “source compatible”, unlike the fixed-function silicon in today’s GPUs and ASICs. The initial product will support over 1,000 processors in an appliance that can be scaled across multiple systems.
The company initially worked with Asian customers, delivering its technology in software. The company has now moved its technology into hardware for even higher performance and further protection of its IP.
CORNAMI is currently working with early customers and benchmarking applications in the cloud. The company is working with large customers in areas of robotics and mobile ad serving. In an early engagement with a key Wall Street bank, running the bank’s own algorithm, TruStream increased application processor utilization from 13.4% to 96.5%, improving overall trading system performance by 14X.
Gordon Campbell, Co-Founder, Chairman and CEO (Founder and CEO of SEEQ, founder, Chairman and CEO of CHIPS & Technologies, President and CEO at 3dfx, founder and Executive Director of Techfarm)
Yatish Mishra, President and COO (previously President, CEO and Board Member of Xand Corp, an ABRY Partners Private Equity owned company providing data center and cloud computing IT solutions)
Paul Master, Co-Founder and CTO (previously CTO at Techfarm and co-founder of QuickSilver; >100 patents issued or pending, >35 professional publications, and 13 first-pass ASIC successes)
Dr. Fred Furtek, Co-Founder and Chief Scientist (founded Concurrent Logic, the world’s second FPGA company, and worked at QuickSilver)
Marty Franz, VP Engineering (previously VP of Engineering at YESvideo and Vidomim, and VP of Technology for Segasoft)
Darlene Kindler, VP Marketing (previously VP, Consumer Marketing/Third Party for 3dfx. Most recently worked at LeoNovus, a distributed cloud company)
Denoid Tucker, VP Product and Services (previously with TierPoint via the 2014 acquisition of Xand, where he served as CTO)
2082 C-2 Walsh Ave. Santa Clara, CA 95050 Tel: 408.337.0070