“We want to make it very easy to expand from a single chip to a large number of chips”


Utilising the RISC-V architecture, Tenstorrent is building AI processors that automatically compile and execute its customers’ machine learning models at high performance, scaling that execution seamlessly across multiple chips and servers. CEO Jim Keller tells Yashasvini Razdan from Electronics For You that the company’s processors can communicate natively with each other, giving them an advantage in code writing and cost efficiency. Read on...

Q. How would you explain your technology?

A. We are developing a groundbreaking artificial intelligence (AI) computer. AI computers perform vast amounts of dense mathematical computation spread across numerous operations, in a way that resembles how the brain works. The brain consists of approximately 10,000 groups of cells that communicate with each other both locally and broadly; computation happens when many cells continuously exchange information with one another. AI problems can be thought of the same way: as highly intensive mathematics combined with constant communication.


We are constructing a computer capable of executing extensive mathematics while simultaneously communicating the results of each calculation step. This form of representation is trainable and produces intriguing results with AI software. To summarise, we are building hardware equipped with many independent processors, and we are developing software that can distribute a customer’s AI program across those processors, which can natively communicate with each other. This gives us advantages in code writing and in the cost efficiency of the computer and the system at large.
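
To make that concrete, here is a toy sketch in Python (not Tenstorrent’s actual software; the `Core` abstraction is invented for illustration) of a model expressed as a graph of operations, with each operation placed on its own small processor that sends its result directly to the processors that consume it:

```python
# Toy sketch of the idea above: one operation per processor, with results
# communicated directly between processors rather than via shared memory.
# This is an invented illustration, not Tenstorrent's software.
from dataclasses import dataclass, field

@dataclass
class Core:
    """One independent processor holding a single operation."""
    name: str
    op: callable
    consumers: list = field(default_factory=list)  # cores fed by this one
    inbox: list = field(default_factory=list)      # operands received so far

    def fire(self):
        # Compute locally, then "send" the result to each consumer core.
        result = self.op(*self.inbox)
        for consumer in self.consumers:
            consumer.inbox.append(result)
        return result

# A tiny dataflow graph: multiply, add a bias, apply an activation.
mul  = Core("mul",  lambda x, w: x * w)
add  = Core("add",  lambda acc, b: acc + b)
relu = Core("relu", lambda v: max(0.0, v))
mul.consumers = [add]
add.consumers = [relu]

mul.inbox = [3.0, 2.0]   # input x and weight w arrive at the first core
mul.fire()               # result flows straight into add's inbox
add.inbox.append(-10.0)  # the bias arrives at the second core
add.fire()
print(relu.fire())       # -> 0.0 (ReLU clips the negative sum)
```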

Q. Could you explain your basic business model?

A. We are a design company specialising in computers, AI, CPUs, and so on. Different users consume computing in different ways: cloud services, servers, chips, chiplets, and more. Different customers prefer to purchase computing at different stages, and our goal is to make our technology available to people in whatever form they prefer to buy it.

Q. What are the benefits of using Tenstorrent AI processors?

A. We are significantly more cost-effective than some of our competitors, and we have demonstrated this to multiple customers. Our hardware is highly optimised for this type of calculation, surpassing traditional GPUs. We are also in the process of open-sourcing certain compilers that people want access to; several components of the software stack have long been proprietary and therefore out of reach for researchers and AI programmers, and open-sourcing them provides that access. Furthermore, our chips can natively communicate with a large number of other chips, eliminating the need for additional networking.

Q. What challenges does your hardware architecture address?

A. We are currently entering the market with a few customers. Some customers struggle to turn certain types of AI programs into high-performance software, because converting an algorithm into a program that runs efficiently takes considerable effort. Our mission is to simplify the transition from a new model to a fast program.

Another challenge is that certain models require multiple computers to run, which makes expanding beyond a single chip difficult. We are dedicated to making it effortless to scale from a single chip to a large number of chips. While we have made progress in this area, we continue to work on this challenging problem.
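
From the software side, the goal can be pictured as a minimal sketch, assuming a simple pipeline-style split; the `partition` helper is invented for illustration. Scaling from one chip to many becomes a change of one number rather than a rewrite:

```python
# Minimal sketch (invented for illustration): divide a model's layers
# evenly across however many chips are available.
def partition(layers, num_chips):
    """Assign a contiguous slice of layers to each chip."""
    per_chip = -(-len(layers) // num_chips)  # ceiling division
    return [layers[i:i + per_chip] for i in range(0, len(layers), per_chip)]

layers = [f"layer{i}" for i in range(8)]
for chips in (1, 2, 4):
    print(f"{chips} chip(s):", partition(layers, chips))
# 1 chip holds all 8 layers; 2 chips hold 4 each; 4 chips hold 2 each.
```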

Q. What is the advantage of having multiple chips, instead of a single processor?

A. A single server chip is big and very expensive. It’s a single piece of silicon with multiple functions on that one piece. If you divide that chip into four pieces, each smaller piece is called a chiplet. You can then put the memory interface on one piece, the network interface on another, the I/O interface on a third, and so on, and each piece becomes simpler to build.

We’ve talked to a whole bunch of companies. Some want to specialise in network chiplets or memory chiplets; we want to specialise in AI processor chiplets. If a company wishes to build a product, it can buy most chiplets as finished pieces and customise only the part it needs to, rather than having to build every piece itself, which is too hard and too expensive. Chiplets break one big problem into smaller problems, and smaller companies can solve those problems independently.
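
As a hedged illustration of that division of labour (every name below is invented), building a product from chiplets amounts to buying most pieces off the shelf and designing only the piece that differentiates you:

```python
# Invented illustration of chiplet-based product assembly: buy standard
# pieces from specialists and design only the part you need to customise.
OFF_THE_SHELF = {
    "memory_interface": "bought from a memory-chiplet specialist",
    "network_interface": "bought from a networking-chiplet specialist",
    "io_interface": "bought from an I/O-chiplet specialist",
}

def build_product(custom_chiplet):
    """Combine purchased chiplets with one custom-designed chiplet."""
    package = dict(OFF_THE_SHELF)
    package["ai_processor"] = custom_chiplet
    return package

for part, source in build_product("our own AI chiplet design").items():
    print(f"{part}: {source}")
```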

Q. Are there any limitations to power with so many chips running?

A. Power management is a significant challenge when designing chips. We implement a power management system that monitors consumption and temperature, adjusting the clock and voltage to optimise performance. We have carefully chosen the size of our chip to strike the right balance.

The brain consists of clusters of cells, and the same principle applies to AI processors. If the clusters are too large, communication within them becomes difficult; if they are too small, they must communicate with other components too frequently. We actively manage this trade-off between computation and networking to ensure efficient power utilisation, and when building a rack of computers we closely monitor power consumption and temperature to stay within limits. This enables effective control of computation across the whole system.
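
The power-management behaviour described in this answer is essentially a dynamic voltage and frequency scaling (DVFS) loop. Below is a simplified sketch; the limits and step sizes are invented for illustration, not Tenstorrent’s actual figures:

```python
# Simplified DVFS sketch: back off clock/voltage when over budget,
# creep back up when there is headroom. All numbers are invented.
TEMP_LIMIT_C = 85.0
POWER_LIMIT_W = 300.0

def adjust(freq_ghz, volts, temp_c, power_w):
    """One step of a dynamic voltage/frequency scaling controller."""
    if temp_c > TEMP_LIMIT_C or power_w > POWER_LIMIT_W:
        # Over budget: reduce frequency and voltage together.
        return max(0.5, freq_ghz - 0.1), max(0.65, volts - 0.02)
    # Headroom available: step back up toward peak performance.
    return min(1.5, freq_ghz + 0.05), min(0.85, volts + 0.01)

freq, volts = 1.5, 0.85
for temp, power in [(80, 250), (90, 320), (88, 310), (82, 280)]:
    freq, volts = adjust(freq, volts, temp, power)
    print(f"temp={temp}C power={power}W -> {freq:.2f} GHz @ {volts:.2f} V")
```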

Q. What is the difference between processors built using RISC-V architecture and those built by ARM or Intel?

A. RISC-V is an open-source, open-standard architecture, allowing anyone to build their own RISC-V processor. ARM, Intel, and AMD, on the other hand, employ proprietary architectures and control their production. RISC-V International serves as a coordinating body, facilitating collaboration through meetings and conferences.

We utilise the RISC-V architecture in our AI processors, with modifications to optimise their performance. We have added instructions and communication capabilities between processors, enabling fast inter-processor communication, a feature not present in ARM or Intel processors. RISC-V is instrumental in building our product: it offers the flexibility for customisation and innovation, allowing users to build the products they want without restrictions.
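
The interview does not detail those added instructions, but their effect can be modelled abstractly: a native send pushes a value straight into a neighbouring core’s receive queue, with no trip through a separate network stack. A toy Python model (not real RISC-V code):

```python
# Toy model of native inter-processor communication: a send lands
# directly in the destination core's receive queue. Invented illustration.
from collections import deque

class CoreModel:
    def __init__(self, name):
        self.name = name
        self.queue = deque()  # hardware-style receive queue

    def send(self, other, value):
        # Models a dedicated send instruction: one hop, straight into
        # the destination core's queue, with no network stack involved.
        other.queue.append(value)

    def receive(self):
        return self.queue.popleft()

a, b = CoreModel("core0"), CoreModel("core1")
a.send(b, {"partial_sum": 42})
print(b.receive())  # -> {'partial_sum': 42}
```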

Q. How is debugging impacted when using open source?

A. In the case of proprietary software, only the owning company has access to the underlying code, making it challenging for external experts to identify and fix issues. However, with open-source software, transparency allows for more effective debugging.

With open-source software, if something doesn’t work, experts can examine the code, identify the problem, and fix it themselves. This level of visibility and accessibility significantly improves the debugging process compared to proprietary software, and it gives users greater control and the ability to resolve issues promptly.

Q. What software programming tools are available for developers to leverage an AI processor’s capabilities?

A. Developers commonly use frameworks such as PyTorch, TensorFlow, ONNX, and JAX to write AI models, and we support all of them. Additionally, we have developed a program called TBM, which helps programmers interface with our hardware. We have also created our own software stack, which we have demonstrated to a small number of customers and plan to release as open-source software.
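
As a hypothetical sketch of that developer flow, the snippet below defines a standard PyTorch model and hands it to a placeholder compile step; `tt_compile` and the device name are invented here and are not Tenstorrent’s actual API:

```python
# The PyTorch part is standard; tt_compile is an invented placeholder
# standing in for a vendor compiler that maps the model onto AI cores.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

def tt_compile(model, device="tt-device-0"):
    """Placeholder: a real vendor compiler would lower the model onto
    the AI processor's cores; here it returns the model unchanged."""
    print(f"(pretend) compiling {type(model).__name__} for {device}")
    return model

compiled = tt_compile(model)
out = compiled(torch.randn(1, 64))
print(out.shape)  # torch.Size([1, 10])
```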

Q. How are you different from your competitors, such as Groq and SambaNova Systems?

A. They’re all AI startups, and they’re picking their particular area of expertise. Groq is focused on inference processing. SambaNova is building models for customers. We want to supply computing technology for inference and training, with various ways to use that technology.

AI is in its infancy and things are changing fast. Our competitors have to map their software to their hardware manually, and manually mapping models isn’t scalable in the long term.

The other big difference between us and them is that we’re integrating CPUs into our AI chips, so the processor can take advantage of both Tenstorrent cores and CPU cores, and we’re the only ones doing that right now. What that means, ultimately, is that on other hardware, while an AI workload is running, calculations that are better done by a CPU have to leave the chip, go to the server CPU, and come back, which stalls everything. Because we have CPUs on our dies, that round trip disappears, which speeds everything up and makes it more efficient.
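
A back-of-the-envelope model of that round trip, with every latency number invented for illustration, shows why keeping CPU cores on the same die matters:

```python
# Invented latency figures, purely to illustrate the round-trip argument.
HOST_ROUND_TRIP_US = 50.0  # leave the AI chip, run on the server CPU, return
ON_DIE_HOP_US = 0.5        # hand off to a CPU core on the same die
CPU_STEP_US = 10.0         # the CPU-friendly calculation itself

def step_cost(on_die_cpu):
    """Total time for one CPU-friendly step inside an AI workload."""
    hop = ON_DIE_HOP_US if on_die_cpu else HOST_ROUND_TRIP_US
    return hop + CPU_STEP_US

print("via host CPU:", step_cost(False), "us")  # 60.0 us, stalls the pipeline
print("on-die CPU:  ", step_cost(True), "us")   # 10.5 us
```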

Q. What are the challenges of using processors in AI workloads, and how do you address them?

A. The primary challenge in AI is the rapid pace of change. The field has shifted between inference and training, between language models and image models, and towards large-scale models running on thousands of processors. Generative models, which involve multiple lookups based on previous results, have also gained prominence. Each technique introduces new software requirements, necessitating a highly programmable and adaptable computer system.

Additionally, AI workloads span a wide range of memory capacities and power budgets. The competition in AI is constantly evolving, and the operating range encompasses diverse needs. To address these challenges, we ensure our AI processors are programmable and support various software constructs, so that we can accommodate the requirements of different industries, whether the AI models are deployed in edge computers, data centres, or robots.

Q. How do you ensure that your AI processors support the needs of different industries?

A. Consider a manufacturer producing screws or nails. They manufacture various sizes without specifying whether they are intended for a house, car, tractor, or fence. Similarly, we offer AI computation in different sizes: small, medium, or even the option to combine a large number of chips to create powerful machines. The choice of models depends on the customer’s requirements.

While certain variations exist, such as generative text-to-image models differing from language models, they still share common elements. We examine the range of software constructs and ensure our processors support a wide spectrum of AI models. Whether the AI models are integrated into edge computers, data centres, or robots, we prioritise meeting the diverse needs of different industries.

Q. How good is it for an AI processor startup business to have a diverse product range?

A. Well, we’re a startup, so when somebody says they want to buy our product, we say yes. As you grow, you might find that the effort required for robotics is much higher than for a data centre, and you must put resources there. Many of the chip manufacturing giants make a lot of money on certain things, but there’s a whole bunch of people nobody is supporting.

I’ve spoken to people making robots who cannot afford to put a $10,000 AI computer in a $10,000 robot; they need very high computational power, or they can’t make the product. We have also talked to a big company building a data centre that wants to use AI and grow it, but it’s too expensive.

So if we can help them solve those problems, we’ll have happy customers and we’ll be able to grow our business.

Q. Could you give us some insight on your partnerships with industry and academia, in AI and deep learning?

A. We have just started a joint venture with a company called Bodhi, which wants to build server products that include both AI and general computation. Because we build chips, we also work closely with the foundries and the CAD companies.

We’ve talked to quite a number of researchers, and one recently built on the Tenstorrent ML software stack. Researchers want an open-source software stack that they can program right down to the hardware, something they don’t currently have access to elsewhere. They want to create novel new models, which may well be successful, but today they have no way to do that.

We’ve also talked to people outside of AI who want to use a similar kind of computation but can’t write the code they want in PyTorch or TensorFlow. We want to build up the software methods so that those people can program well too.

Q. What are the future developments and advancements in AI processors?

A. AI processors will continue to get bigger. We will continue to optimise how the mathematics operations work for more efficiency. We will make them easier to program, so more people can participate and build high-performance AI models.

We’re going to see people building AI into more products. People have talked to us about licensing our AI and RISC-V technology to go off and make their product that does what they want. A big chip manufacturer may sell a standard AI card that’s quite good and very expensive, but they won’t let you change it. We think that flexibility is really important.


 


