How edge computing is revolutionising data processing

Processing today’s data explosion in central clouds and data centres isn’t delivering the cost-effective, almost instantaneous results organisations need.


Carl Morris, Sector Chief Technology Officer (CTO), Digital Industries

Not only are organisations creating more data than ever before, but they’re also sitting on a wealth of data they are yet to use.

How can organisations handle this raw data explosion and turn it into the intelligent data that drives real-time decision-making? 

Increasingly, organisations are using edge computing to move processing power closer to the ‘edge’, where the decisions are being made – away from expensive and problematic processing in central clouds or data centres. As a result, edge computing is overtaking cloud computing in popularity. From my conversations with global customers, I’d say 70% are focusing on edge computing, and only 30% are looking at the cloud.

It’s time to move away from data processing at the core

Data volumes keep growing, putting pressure on the traditional set-up in which everything is sent to central resources for processing. More data means more traffic to move around, and many organisations going digital are finding this expensive.

As data traffic grows, the bandwidth costs to support it are spiralling upwards, with no sign of stopping. Other organisations are experiencing data overload, their data-centre links overwhelmed by traffic and no end in sight.

Continuing to send vast quantities of data to core data centres or clouds for analysis isn’t sustainable – especially as organisations increasingly focus on the benefits of sharing and analysing large volumes of data in real time. Local processing at the edge is the most effective way to support Internet of Things (IoT) technologies, innovative apps and new operational developments.

Other factors also encourage the idea of keeping data processing local. Data sovereignty issues can mean data has to stay in-country. Local processing is often seen as a smart option from an information safety and availability point of view. Security and privacy concerns about public clouds are also driving interest in moving data processing to the edge.

At the same time, wider knowledge about the potential of digital transformation means every team across the organisation now expects equal access to digital capabilities – no matter how far their work base is from the enterprise’s core data centres and clouds. Surely, in this connected age, being at a remote site shouldn’t affect access to rapid data analysis.

Turning raw data into intelligent data is critical to supporting real-time decision-making, and the outstanding experiences consumers and employees expect.

It’s time to make existing data work harder

There’s a new perspective on existing data, too. A greater awareness of the power of data is making organisations examine how they can extract value from the data they’ve not used in the past. This is particularly noticeable in the industrial environment, where data has traditionally been used reactively rather than proactively: industrial control systems track events, and the team takes action in response.

Organisations are thinking bigger now. They want to explore ways to gather all their information, add some intelligence, and start to predict problems. They’re interested in using Artificial Intelligence (AI) to predict when an issue is likely to cause failure, then intervening prescriptively to extend the machinery’s lifespan until it can be serviced in a convenient window – avoiding expensive, unplanned downtime.
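To make the predictive idea concrete, here is a minimal sketch – not any specific product’s method – of how an edge device might score incoming sensor readings against a rolling baseline and flag drift before it becomes a failure. The window size, threshold and example readings are all illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

def make_drift_detector(window=50, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the
    rolling baseline -- a crude early-warning signal for machinery drift."""
    history = deque(maxlen=window)

    def check(reading):
        if len(history) >= 10:  # wait until a baseline has formed
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(reading - mu) > threshold * sigma
        else:
            anomalous = False
        history.append(reading)
        return anomalous

    return check

# Hypothetical bearing-temperature stream: steady around 20 °C, then a spike.
check = make_drift_detector()
readings = [20.1, 20.3, 19.9, 20.0] * 5 + [26.5]
flags = [check(r) for r in readings]  # only the final spike is flagged
```

In practice the "intelligence" would be a trained model rather than a z-score, but the pattern is the same: the statistics live at the edge, and only the alerts need to travel upstream.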

New AI designed to optimise operations and minimise energy use and carbon emissions is causing a lot of excitement amongst organisations seeking more sustainable ways of working – but it needs edge computing.

Edge computing is the future of intelligent data

Edge computing solves the challenges of real-time data analysis by moving data, computing, and workloads closer to where they’re needed. Edge computing’s local processing is the key to turning data into better insights, actions and results, and to doing it faster.

It keeps data local, complying with data sovereignty legislation, and away from public-cloud security vulnerabilities. Workloads processed at the edge have lower latency, supporting the near-instant analysis requirements of IoT technologies, Augmented Reality (AR), Virtual Reality (VR), and AI applications.

In essence, edge computing enables organisations to change how they approach data to extract more value from it, using intelligent data as a platform for more innovative and sustainable operations. 

How to make the move to edge computing

Deploying edge computing is more than plugging in some equipment and being ready to go. It has to happen as part of a holistic review of the organisation’s infrastructure that identifies how the network needs to be refreshed to support it.

Some organisations look at their ageing, flat networks and wonder where to start. They worry about the level of investment it will take.

My advice is to start small, with a single compelling use case, such as running an AI-powered app that will identify how to save 10% on your energy bills. Then, use that tangible return on investment to cross-subsidise further investment into fixing and updating your network. By thinking carefully about the applications you want to run at the edge, you can quantify the returns and gradually establish an edge-ready network.
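The start-small arithmetic can be sketched like this, with hypothetical figures chosen only to illustrate the payback reasoning, alongside the 10% saving mentioned above:

```python
def payback_months(annual_bill, saving_rate, upfront_cost):
    """Months until the projected energy saving has repaid the deployment."""
    monthly_saving = annual_bill * saving_rate / 12
    return upfront_cost / monthly_saving

# e.g. a £500,000 annual energy bill, a 10% saving from the AI-powered app,
# and a £100,000 pilot edge deployment:
months = payback_months(500_000, 0.10, 100_000)  # 24 months to break even
```

Once the pilot is repaying itself, each subsequent saving funds the next slice of network modernisation – which is the cross-subsidy argument in numbers.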
