
Transformation

Fast and efficient memcomputers will permit the storage and processing of data at the same time

By Martyn Warwick

Jul 7, 2015

UCSD © Flickr/cc-licence/Amanda Carson

A research team in the US claims to have proven the viability of the 'memcomputer', a new architecture that permits machines to store data and process it at the same time. If it works, the dreaded 'data shuffle', whereby information is shunted in millions of iterative, consecutive transactions between the memory and the CPU, may become a thing of the past. The promise is that memcomputers will be very speedy and very efficient in their use of energy and other resources.

If you'll forgive the pun, one of the major bugbears with the vast majority of computers is the inefficiency inherent in a venerable architecture that stores information in one physical location - the memory - and processes it in another - the CPU. This slows computers down (to increasingly unacceptable levels as data expands and proliferates), uses a lot of electricity and other resources, and generates a lot of heat and waste.

However, scientists at the University of California, San Diego (UCSD) have designed the 'memcomputer', a new kind of 'memory-crunching' computer that both stores information and processes it in the same place. This is referred to as a 'collective state'.

At the moment the prototype is no more than a Proof of Concept (POC), but the team, headed up by Professor Massimiliano Di Ventra, claims that the new design, based in part on the way the human brain simultaneously takes in and processes data, could be constructed as a test-bed now and later manufactured in quantity once the problems with the new architecture have been sorted out.

And there's the rub. There certainly are a lot of problems to be solved before the memcomputer can graduate from POC to become a manufactured reality.

A viable alternative to quantum computing?

The research was published late last week in the academic journal 'Science Advances' but, in fact, memcomputers are not an entirely new concept, having first been posited as a theory in the magazine 'Popular Mechanics' back in the 1970s. The search for viable alternatives to the extant von Neumann and later Harvard and hybrid-Harvard architectures has been lengthy, expensive and inconclusive, and has led many a scientist up many a technological cul-de-sac.

Most hopes and expectations are currently invested in 'quantum computing', which, in essence, relies on the strange behaviours and properties of atomic-level physics. It is generally agreed that such devices will be built one day, but they would require very specific and rather fragile environments in which to operate and, in any case, we simply don't yet have the technology to provide those environments.

Thus the search for alternative architectures continues as we work current computers harder and harder. As more and more data is pushed through the computers we have today, it takes them longer and longer to come up with the answers.

That's because standard von Neumann-architecture computers solve problems by following a step-by-step iterative process in which data is shuffled back and forth millions, billions or even trillions of times. Each shuffle happens at high speed, but so many occur in succession that the milliseconds mount up, with the result that some problems can take even the most powerful computers days, weeks, months or even years to answer.
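To make the bottleneck concrete, here is a minimal sketch (ours, purely illustrative and not from the UCSD paper) of how a conventional architecture racks up memory-to-CPU transfers: even a trivial task such as summing a list performs one fetch per element before any useful arithmetic happens.

```python
# Toy illustration of the von Neumann 'data shuffle': every operation
# fetches an operand from memory into the CPU, so the transfer count
# grows in lockstep with the number of processing steps.

def von_neumann_sum(memory):
    """Sum a list the conventional way, counting memory<->CPU transfers."""
    transfers = 0
    accumulator = 0                  # lives 'in the CPU'
    for address in range(len(memory)):
        operand = memory[address]    # fetch: memory -> CPU
        transfers += 1
        accumulator += operand       # compute inside the CPU
    transfers += 1                   # write the result back: CPU -> memory
    return accumulator, transfers

total, moves = von_neumann_sum(list(range(1_000_000)))
print(f"result={total}, memory transfers={moves}")  # ~one transfer per element
```

A memcomputer's pitch is precisely that this fetch-compute-writeback loop disappears, because the element holding the data is also the element doing the computing.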

However, a memcomputer might take only a few thousand, a few hundred thousand or a few million iterations to answer the same problem. What's more, some extremely complex problems might actually be solvable in a single step.

Back to basics

Professor Di Ventra says that "memcomputers can be built with standard technology and operate at room temperature. This puts them on a completely different level of simplicity and cost in manufacturing compared to quantum computers."

The research team built its POC memcomputer by going back to basics: rather than using classic, traditional silicon transistors, it used an array of six "memprocessors".

A transistor's usual function is to act as a programmed gateway that either lets current through or stops it in its tracks. A memprocessor does the same, but the act of switching also physically changes some of its properties, such as its electrical resistance. What's more, even when a memprocessor is disconnected from its power source, it retains that physical change. So it can behave as a CPU while also holding information encoded in its resistance, allowing it to store data and process it at the same time.
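The behaviour described above resembles that of a memristive element, and the following toy model (our hypothetical sketch, not UCSD's actual circuit; the class name and its parameters are invented for illustration) shows the idea: computing with the device necessarily updates its resistance, and that resistance survives a power cycle.

```python
# Hypothetical toy model of a memprocessor-like element. The same
# component 'processes' (computes a current from a voltage) and
# 'stores' (the computation shifts its resistance), and the stored
# resistance persists when power is removed.

class ToyMemprocessor:
    def __init__(self, resistance=1.0):
        self.resistance = resistance           # persistent physical state

    def apply_voltage(self, volts):
        """Process and store in one step: computing the current
        also nudges the resistance that encodes the memory."""
        current = volts / self.resistance      # Ohm's law: the 'computation'
        self.resistance += 0.1 * abs(current)  # the state change is the 'memory'
        return current

m = ToyMemprocessor()
print(m.apply_voltage(5.0))  # computes a current and alters the resistance
# ...disconnect the power; resistance is a physical property, so unlike
# DRAM nothing is lost...
print(m.resistance)          # the stored state survives the power cycle
```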

On the downside, the prototype memcomputer is analogue, not digital, and as such is highly susceptible to interference from noise, which severely limits the number of memprocessors that can be arrayed. Professor Di Ventra admits and accepts this but says, "Memcomputers can be made also digital, therefore less susceptible to noise, and hence they are scalable to a large number of units. Digital memcomputers will then hold great promise to complement present computers in those tasks in which they are not efficient."

It's early days, of course, but things do look promising.
