Excessive APIs are killing your website and app

Published on 02/07/2019 | Written by Will Barrera



How many API calls and microservices are too many?

Availability is always a top concern for people who manage internet-facing websites and services. Slow websites aren’t much more fun than sites that you can’t get to at all.

Despite improvements in internet connectivity and speeds, the incidence of slowdowns and site or app unavailability is rising.

Complexity is a big part of the problem.

When it comes to troubleshooting, the internet has often been referred to as a ‘black box’: historically, diagnosis involved a lot of finger pointing, with every party offering a theory that put the fault somewhere other than their own systems. There are plenty of tools that can tell you a site is fully or partially down, but very few that can help you figure out why.

As the architectures of websites and apps have evolved, complexity has only increased.

Enterprises and SaaS providers are increasingly using third-party APIs and cloud services as part of their web and application architectures.


This distributed, microservices-based approach to building applications not only provides best-of-breed functions, but also allows companies to quickly consume and deliver new services.

In practice, that means new features can be added to a website or app much faster.

However, since these add-on services are not internally operated, isolating the source of a problem when something goes wrong can be challenging.

When something does go wrong and there are multiple APIs and interdependencies in use, the old question of whether the application or the network is at fault becomes “Which application?” and “Which network?”.

And where problems are simply described as slow performance – a subjective measure that may not uniformly impact all users – the job of the site or app owner is made even more difficult.

Kiss KISS goodbye
Applications today might leverage dozens of APIs to handle services such as messaging and voice, maps and payments, while also connecting to cloud-based services such as CRM, ERP and analytics.

But websites are getting weighed down by the addition of many externally-hosted components.

Even a seemingly simple ‘Buy Now’ function on an e-commerce site will invoke many external services, including payment gateways, CRM, analytics, inventory and fulfilment.

These API calls might all be considered necessary: to trigger delivery and tracking of the purchased item, and to record the purchase against the customer’s profile so the retailer can better understand their buying habits and recommend other purchases the next time they visit the e-store.
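To make that fan-out concrete, here is a minimal sketch (in TypeScript) of what a ‘Buy Now’ handler can end up doing behind the scenes. Every endpoint, payload shape and helper here is hypothetical, invented purely for illustration:

```typescript
// Hypothetical checkout flow: one "Buy Now" click fans out to five
// external services. All URLs and payload shapes are illustrative only.
interface Order {
  orderId: string;
  customerId: string;
  sku: string;
  amountCents: number;
}

// Small helper to POST JSON to an external service.
async function post(url: string, body: unknown): Promise<Response> {
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
}

async function buyNow(order: Order): Promise<void> {
  // Each await below is a full round trip to someone else's
  // infrastructure; the latencies (and failure modes) accumulate.
  await post("https://payments.example.com/charge", {
    orderId: order.orderId,
    amountCents: order.amountCents,
  }); // payment gateway
  await post("https://inventory.example.com/reserve", {
    sku: order.sku,
  }); // inventory check and reservation
  await post("https://fulfilment.example.com/shipments", {
    orderId: order.orderId,
  }); // delivery and tracking
  await post("https://crm.example.com/purchases", {
    customerId: order.customerId,
    sku: order.sku,
  }); // record against the customer's profile
  await post("https://analytics.example.com/events", {
    type: "purchase",
    orderId: order.orderId,
  }); // analytics feeding future recommendations
}
```

Five sequential awaits mean five network round trips to other people’s infrastructure before the shopper sees a confirmation – and if any one of them hangs, the purchase appears to hang with it.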

But the weight of all of the dependencies sending and retrieving data from external sources means websites are going to continue to get slower, while at the same time their risk surface (from a security and data protection standpoint) increases.

Putting a number on it
There are no hard and fast rules around how many is too many when it comes to the API calls that websites and mobile apps make in order to function.

However, developer forums are full of questions on where to draw the line.

“Technically speaking there is no such thing as too much APIs. I have worked on apps which had as little as 10 APIs to more than 40 APIs. As long as you need some data from remote server you need an API,” Anil Deshpande, a corporate Android app developer, said on Quora.

Other developers have similarly confronted the issue of determining the ideal number of API calls per screen, and the fallout when too many calls start breaking the website or app.
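One common defensive pattern once the call count climbs is to give each non-critical dependency a timeout and a fallback, so a single slow third-party API degrades one feature instead of breaking the whole screen. A minimal sketch, assuming an arbitrary two-second budget and hypothetical endpoints:

```typescript
// Wrap a third-party call in a timeout so one slow dependency
// degrades gracefully instead of stalling the whole page.
// The 2000ms budget and fallback values are arbitrary examples.
async function fetchWithFallback<T>(
  url: string,
  fallback: T,
  timeoutMs = 2000,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) return fallback; // dependency errored: degrade, don't break
    return (await res.json()) as T;
  } catch {
    return fallback; // timed out or network failure
  } finally {
    clearTimeout(timer);
  }
}

// Non-critical widgets load in parallel; the page renders either way.
async function loadWidgets() {
  const [recommendations, reviews] = await Promise.all([
    fetchWithFallback("https://recs.example.com/for-user/42", []),
    fetchWithFallback("https://reviews.example.com/sku/42", []),
  ]);
  console.log(recommendations.length, reviews.length);
}
```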

What to do about it
Understanding the trade-off between functionality and user experience, and knowing how every third-party web or app component impacts performance, will become even more critical for enterprises and SaaS providers.

In addition, active testing – showing how each page component impacts performance from the user’s vantage point – will become increasingly important for DevOps and site reliability engineers.
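In a browser, the standard Resource Timing API offers a first approximation of that per-component view, reporting how long each third-party script, image or API call actually took from the user’s side. A minimal sketch (the 500ms ‘slow’ threshold is an arbitrary example):

```typescript
// List every third-party resource the current page loaded, slowest
// first, using the browser's standard Resource Timing API.
const SLOW_MS = 500; // arbitrary threshold for this example

const entries = performance.getEntriesByType(
  "resource",
) as PerformanceResourceTiming[];

entries
  .filter((e) => new URL(e.name).origin !== location.origin) // external only
  .sort((a, b) => b.duration - a.duration)
  .forEach((e) => {
    const flag = e.duration > SLOW_MS ? "SLOW" : "ok";
    console.log(`${flag}\t${Math.round(e.duration)} ms\t${e.name}`);
  });
```

Dedicated digital experience management tools do this continuously and from many vantage points, but even a snippet like this makes the weight of each third-party component visible.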

That visibility matters given the current focus on customer experience in organisations. A recent survey by Gartner showed 75 percent of organisations increased customer experience technology investments in 2018.

One of our biggest takeaways from 2018 is that cloud adoption and the digitisation of customer experiences are mainstreaming – and it’s high time for IT ops teams to catch up.

One way they can do that is by benchmarking their performance against other website operators and app makers. Objective benchmarking data and digital experience management tools are two ways to help operators make trade-off decisions as they seek to continuously improve customer digital experience.


ABOUT WILL BARRERA//

Will Barrera, ANZ Regional Sales Manager at ThousandEyes
