Extracting value from data: analyse, apply intelligence, action

  • The human touch
  • Leveraging standard hardware
  • Pulling it together
  • Intel IT’s case study

This fourth and final article in this series on the data management lifecycle looks at the part that has all the buzz around it – using analytics and artificial intelligence (AI) to extract actionable intelligence. As ever, it’s essential to see technology only as a means to achieve business goals, not an end in itself.

AI is a computer system designed to perform tasks that usually need human intelligence, such as visual perception for self-driving vehicles, speech recognition (Amazon’s Alexa and Telefónica’s AURA), and translating languages (Google Translate). As José Manuel de Arce, Deputy Director OSS/BSS Infrastructure, Work Space and OSS Technology at Telefónica International Wholesale Services, explained: “AI is about working with data and doing with it what humans would do, without the errors and faster”.

Although AI is often used interchangeably with Big Data and analytics, in reality it is a subset. Where analytics is typically used to identify patterns or trends, AI involves self-learning algorithms that can generate predictions and make assumptions based on the data.
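To make that distinction concrete, here is a minimal sketch in Python (the dataset, figures and column names are invented purely for illustration): the analytics step summarises a pattern that is already in the data, while the machine-learning step fits a simple model and predicts a value that is not.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical monthly traffic figures, invented for illustration only
df = pd.DataFrame({
    "month":   [1, 2, 3, 4, 5, 6],
    "traffic": [110, 118, 131, 140, 152, 166],
})

# Analytics: identify a pattern that already exists in the data
print(df["traffic"].pct_change().mean())    # average month-on-month growth

# ML: fit a model and generate a prediction for data it has not seen
model = LinearRegression().fit(df[["month"]], df["traffic"])
print(model.predict(pd.DataFrame({"month": [7]})))   # forecast for month 7
```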

So a fundamental question that must be asked of each desired outcome is: do we need AI to achieve it?

The human touch

De Arce’s description spoke to a common fear about AI: that humans will lose control of systems and processes, even though machines can only do what they are programmed to do by humans. In many instances, humans will never be entirely out of the loop, and many decisions remain theirs to make. One example is where to locate analytics engines, with or without AI functionality, which depends on the application.

American Airlines relies on real-time analytics, analysing data pulled from multiple sources, to improve customer service when customers choose to contact the company via social networks such as Twitter and Facebook. Even so, a person makes the ultimate decision about the best way to resolve customers’ issues.

Some of the concerns about AI are closely linked to another issue – the dire, global shortage of people with the right skills for understanding, developing and deploying AI. Telcos are scrambling to gain the expertise they need to leverage AI, with their approaches varying from retraining staff to sponsoring students through university and other programmes.

Importantly, both the difficulties of leveraging AI and its potential across so many areas of operators’ businesses are recognised, and have sparked innovation on many fronts. This includes a flurry of startups that are gaining momentum: in 2019, Intel Capital invested $117 million in disruptive new firms, some of which are focused on AI and communications.

Leveraging standard hardware

We have touched on the differences between Big Data and AI, but even within AI there are different categories. Machine Learning (ML) is the broadest category and uses structured data, whereas a newer category called Deep Learning (DL) extends ML by processing information in layers and is typically based on neural networks: the result or output from one layer becomes the input for the next.
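To picture that layered flow, here is a minimal sketch in plain Python and NumPy (the shapes and weights are arbitrary, chosen only to show data passing from one layer to the next):

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied after each layer's transform
    return np.maximum(0, x)

def dense_layer(inputs, weights, bias):
    # One layer: a linear transform followed by an activation
    return relu(inputs @ weights + bias)

# Illustrative shapes only: 4 input features -> 8 hidden units -> 2 outputs
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))      # a single input sample
h = dense_layer(x, w1, b1)       # the output of layer 1...
y = dense_layer(h, w2, b2)       # ...becomes the input to layer 2
print(y)
```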

ML is computationally intensive, and while standard computing platforms can handle this workload comfortably, DL increases the computational requirements significantly. For this reason, second-generation Intel Xeon Scalable processors have Intel DL Boost embedded in them and are the only general-purpose processors with built-in AI acceleration designed specifically for DL.

Intel has also spent years optimising popular DL frameworks such as TensorFlow, PyTorch and PaddlePaddle to leverage the embedded AI instruction sets in Intel Xeon Scalable processors. The journey started with the first-generation Intel Xeon Scalable processors leveraging the AVX-512 instructions. It continues with DL Boost in the second generation, and the results so far are spectacular, particularly for inferencing. Hence, for the many enterprises already running Intel Xeon Scalable processor-based data centres or cloud environments, general-purpose computing offers a good foundation for any AI journey.
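In practice this means developers do not have to write Intel-specific code: an ordinary TensorFlow model picks up the optimised (oneDNN) kernels on Intel Xeon Scalable processors automatically. The sketch below assumes a recent TensorFlow build; the TF_ENABLE_ONEDNN_OPTS environment variable that toggles the optimisations is version-dependent, so treat it as illustrative rather than definitive.

```python
import os
# Request oneDNN-optimised kernels; must be set before TensorFlow is imported.
# Recent TensorFlow builds on x86 enable this by default (version-dependent).
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")

import numpy as np
import tensorflow as tf

# An ordinary Keras model: no Intel-specific code is needed
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

batch = np.random.rand(8, 32).astype("float32")
predictions = model(batch)   # inference uses the CPU's vector/AI instructions where available
print(predictions.shape)     # (8, 10)
```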

Pulling it together

To bring together all this innovation and optimisation for AI, Intel has created the Intel Select Solution for BigDL on open source Apache Spark (the unified analytics engine). The solution’s standardised components include Intel Xeon Scalable processors, Intel solid-state drives and Intel Ethernet Network Adapters. It can help operators deploy and manage scalable solutions because it accommodates the addition of hundreds of nodes without degrading performance or changing the fundamental architecture.

BigDL is a distributed DL library designed to augment the storage and compute capacities of Apache Spark, and optimise the development of DL, for both on-premises infrastructures and hybrid-cloud models.

BigDL supports the development of new DL models for training and serving on the same big data cluster, as well as models from other popular frameworks, such as TensorFlow and Keras. This allows operators to import trained models into the BigDL framework or use models trained in BigDL in other frameworks.
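As a rough illustration of what that looks like for a developer, here is a minimal training sketch using BigDL’s classic (0.x-era) Python API on a Spark cluster; module paths and class names changed in later BigDL releases, and the data here is randomly generated, so treat this as a sketch rather than a recipe.

```python
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import init_engine, create_spark_conf, Sample
from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax
from bigdl.nn.criterion import ClassNLLCriterion
from bigdl.optim.optimizer import Optimizer, SGD, MaxEpoch

# Spark context configured for BigDL, then the BigDL engine is initialised
sc = SparkContext(conf=create_spark_conf().setAppName("bigdl-sketch"))
init_engine()

# Toy training data distributed across the cluster as an RDD of BigDL Samples
def random_sample(_):
    features = np.random.rand(10).astype("float32")
    label = np.array([float(np.random.randint(1, 3))])   # BigDL class labels are 1-based
    return Sample.from_ndarray(features, label)

train_rdd = sc.parallelize(range(1000)).map(random_sample)

# A small feed-forward model defined with BigDL layers
model = Sequential()
model.add(Linear(10, 32))
model.add(ReLU())
model.add(Linear(32, 2))
model.add(LogSoftMax())

# Distributed training runs on the same Spark cluster that holds the data
optimizer = Optimizer(model=model,
                      training_rdd=train_rdd,
                      criterion=ClassNLLCriterion(),
                      optim_method=SGD(learningrate=0.01),
                      end_trigger=MaxEpoch(2),
                      batch_size=32)
trained_model = optimizer.optimize()
```

Because training runs on the existing Spark cluster, the same infrastructure that prepares and stores the data can also serve the DL workload, which is the point of coupling BigDL to Spark.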

What’s more, BigDL is supported by Analytics Zoo, which provides a unified AI platform and pipeline with built-in reference use cases, another big help in simplifying the development of AI solutions. Analytics Zoo has an extensible architecture, built on top of the open source platforms Apache Spark, TensorFlow, Keras and BigDL, to support additional libraries and frameworks.

In short, the advantages of Intel Select Solution for BigDL on open source Apache Spark are:

  • an ML/DL infrastructure with scalable storage and compute;
  • optimised total cost of ownership, because it runs on general-purpose hardware;
  • the ability to deploy AI where the data is generated and stored, where appropriate; and
  • faster time to market by using a turnkey solution that includes a development toolset optimised for the most popular software libraries.

Intel IT’s case study

As Intel itself digitally transforms from a PC-centric company into a data-centric one, it, like data-rich operators, needs to gain insights from its vast amounts of data faster to maintain a competitive advantage. This also gives it first-hand experience and expertise that it can leverage to help its customers, and provides proof of the power of standardised hardware.

Its supply chain is complex (with 600 facilities in 63 countries, plus 19,000 suppliers and 2,000 customers) and needs fast, data-driven decisions to optimise order taking, resource procurement, manufacturing, testing and product delivery.

Intel IT uses SAP HANA 2 to operate, optimise and innovate within its global supply chain, and recently compared its SAP HANA 2 finance analytics cluster (three four-year-old servers) with a single server based on second-generation Intel Xeon Scalable processors and new Intel Optane DC persistent memory.

Each older server has 2TB of DRAM (6TB in total), while the new server was configured with 1.5TB of DRAM and 3TB of Intel Optane DC persistent memory (4.5TB of memory in total). Despite having less total memory, the single newer server delivered 2.4 times better performance, providing faster answers to business questions.

In summary

This final article in our series of four about the four major phases of the data lifecycle – acquisition, preparation, analysis and action – highlights how they form a cyclical, rather than a linear, process. Extracting value from data is never a completed task, because of new technologies that can be leveraged, CoSPs’ changing business goals and operational models, new insights mined from data, new markets and challenges, and more.

This is why, as we advocated in the first article of the series, it is essential that CoSPs have a good understanding of the outcomes they want to achieve before embarking on the next cycle. As they progress, they must revisit those outcomes to see if and how they have changed before evolving the lifecycle and making the necessary adjustments. If there is a founding principle for releasing ever more value from data, it is that each stage in the lifecycle has a fundamental impact on the next.

This is the fourth article in a series of four looking at the data lifecycle. You can read about the start of the cycle here.
