Serverless Computing – Pros and Cons

Serverless computing is the fastest-growing cloud service model right now, with an annual growth rate of 75%, according to RightScale’s State of the Cloud Report. That’s hardly surprising, given the technology’s ability to lower costs, reduce operational complexity, and increase DevOps efficiencies.

So, as the calendar was poised to turn to a new year, we asked several experts what to expect next from this rising technology. While we received a wide range of answers, everyone agreed that serverless will mature and see even greater adoption rates in 2020.

Why use serverless computing?

Serverless computing offers a number of advantages over traditional cloud-based or server-centric infrastructure. For many developers, serverless architectures offer greater scalability, more flexibility, and quicker time to release, all at a reduced cost. With serverless architectures, developers do not need to worry about purchasing, provisioning, and managing backend servers. However, serverless computing is not a magic bullet for all web application developers.

What are the advantages of serverless computing?

No server management is necessary

Although ‘serverless’ computing does actually take place on servers, developers never have to deal with the servers. They are managed by the vendor. This can reduce the investment necessary in DevOps, which lowers expenses, and it also frees up developers to create and expand their applications without being constrained by server capacity.

Developers are only charged for the server space they use, reducing cost

As in a ‘pay-as-you-go’ phone plan, developers are only charged for what they use. Code only runs when backend functions are needed by the serverless application, and the code automatically scales up as needed. Provisioning is dynamic, precise, and real-time. Some services are so exact that they break their charges down into 100-millisecond increments. In contrast, in a traditional ‘server-full’ architecture, developers have to project in advance how much server capacity they will need and then purchase that capacity, whether they end up using it or not.
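
To make the billing model concrete, here is a minimal back-of-the-envelope sketch. The per-unit price, the flat server cost, and the two workloads are hypothetical placeholders rather than real vendor rates; the point is simply that charges track actual invocations instead of reserved capacity.

```python
# Rough cost comparison: pay-per-use billing vs. an always-on server.
# All prices and workloads below are hypothetical placeholders, not vendor rates.

PRICE_PER_100MS = 0.0000002   # assumed charge per 100 ms of execution
SERVER_MONTHLY_COST = 50.00   # assumed flat monthly cost of a small dedicated server

def serverless_monthly_cost(invocations_per_month: int, avg_runtime_ms: float) -> float:
    """Estimate a month of serverless charges when billing is in 100 ms increments."""
    billed_units = -(-avg_runtime_ms // 100)  # ceiling division: round up to the next 100 ms
    return invocations_per_month * billed_units * PRICE_PER_100MS

# A spiky workload: 1 million short requests per month.
spiky = serverless_monthly_cost(invocations_per_month=1_000_000, avg_runtime_ms=120)

# A steady, heavy workload: 200 million requests per month.
steady = serverless_monthly_cost(invocations_per_month=200_000_000, avg_runtime_ms=120)

print(f"spiky workload:  ${spiky:,.2f} serverless vs ${SERVER_MONTHLY_COST:,.2f} fixed")
print(f"steady workload: ${steady:,.2f} serverless vs ${SERVER_MONTHLY_COST:,.2f} fixed")
```

Under these made-up numbers, the spiky workload costs only cents on a serverless platform, while the steady, heavy workload would be cheaper on the fixed server, which foreshadows the long-running-process caveat discussed below.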

Serverless architectures are inherently scalable

Imagine if the post office could somehow magically add and decommission delivery trucks at will, increasing the size of its fleet as the amount of mail spikes (say, just before Mother’s Day) and decreasing its fleet for times when fewer deliveries are necessary. That’s essentially what serverless applications are able to do.
Applications built with a serverless infrastructure will scale automatically as the user base grows or usage increases. If a function needs to be run in multiple instances, the vendor’s servers will start up, run, and end them as they are needed, often using containers (the functions start up more quickly if they have been run recently – see ‘Performance may be affected’ below). As a result, a serverless application will be able to handle an unusually high number of requests just as well as it can process a single request from a single user. A traditionally structured application with a fixed amount of server space can be overwhelmed by a sudden increase in usage.
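
This elasticity relies on each invocation being self-contained. The sketch below uses the AWS Lambda-style Python handler signature purely as an illustration; the event fields and the response shape are assumptions, and the same idea applies to any function-as-a-service platform.

```python
import json

def handler(event, context):
    """A stateless, Lambda-style function handler.

    Because the function keeps no state between invocations, the platform can
    run any number of copies in parallel and tear them down when traffic drops.
    The 'order_id' field is a hypothetical example payload.
    """
    order_id = event.get("order_id")
    result = {"order_id": order_id, "status": "processed"}
    return {
        "statusCode": 200,
        "body": json.dumps(result),
    }
```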

Quick deployments and updates are possible

With a serverless infrastructure, there is no need to upload code to servers or do any backend configuration in order to release a working version of an application. Developers can very quickly upload bits of code and release a new product. They can upload code all at once or one function at a time, since the application is not a single monolithic stack but rather a collection of functions provisioned by the vendor.
This also makes it possible to quickly update, patch, fix, or add new features to an application. It is not necessary to make changes to the whole application; instead, developers can update the application one function at a time.

Code can run closer to the end user, decreasing latency

Because the application is not hosted on an origin server, its code can be run from anywhere. It is therefore possible, depending on the vendor used, to run application functions on servers that are close to the end user. This reduces latency because requests from the user no longer have to travel all the way to an origin server.

What are the disadvantages of serverless computing?

Testing and debugging become more challenging

It is difficult to replicate the serverless environment in order to see how code will actually perform once deployed. Debugging is more complicated because developers do not have visibility into backend processes, and because the application is broken up into separate, smaller functions.
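
One common mitigation is to exercise each function locally against hand-built event payloads before deploying. A minimal sketch, reusing the hypothetical handler from the scalability section and assuming it lives in a module named app.py (a made-up name):

```python
# A minimal local test for the handler sketched earlier. A hand-built event
# payload stands in for what the platform would supply at runtime; this cannot
# reproduce the provider's real permissions, limits, or scaling behaviour.
import json

from app import handler  # hypothetical module containing the handler above

def test_handler_processes_order():
    fake_event = {"order_id": "A-123"}  # hypothetical payload
    response = handler(fake_event, context=None)

    assert response["statusCode"] == 200
    body = json.loads(response["body"])
    assert body["order_id"] == "A-123"
    assert body["status"] == "processed"
```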

Serverless computing introduces new security concerns

When vendors run the entire backend, it may not be possible to fully vet their security, which can especially be a problem for applications that handle personal or sensitive data.
Because companies are not assigned their own discrete physical servers, serverless providers will often be running code from several of their customers on a single server at any given time. This issue of sharing machinery with other parties is known as ‘multi-tenancy’ – think of several companies trying to lease and work in a single office at the same time. Multi-tenancy can affect application performance and, if the multi-tenant servers are not configured properly, could result in data exposure. Multi-tenancy has little to no impact when the provider sandboxes functions correctly and has powerful enough infrastructure.

Serverless architectures are not built for long-running processes

Serverless functions are designed for short-lived tasks: providers charge for the amount of time code is running, and most also cap how long a single invocation may run. This limits the kinds of applications that can cost-effectively run in a serverless architecture; an application with long-running processes may cost more to run on serverless infrastructure than on a traditional setup.

Performance may be affected

Because it’s not constantly running, serverless code may need to ‘boot up’ when it is used. This startup time may degrade performance. However, if a piece of code is used regularly, the serverless provider will keep it ready to be activated – a request for this ready-to-go code is called a ‘warm start.’ A request for code that hasn’t been used in a while is called a ‘cold start.’
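
A widely used way to soften cold starts is to perform expensive initialization once, at module import time, so that warm invocations can reuse it. A sketch of the pattern, with a hypothetical create_db_connection helper standing in for any slow setup step:

```python
import os

def create_db_connection(url):
    """Hypothetical helper standing in for any slow setup step
    (opening database connections, loading ML models, reading config, ...)."""
    return {"url": url}  # placeholder; a real helper would return a live connection

# Runs once per container, when the module is first imported (i.e. on a cold
# start). Warm invocations of the same container reuse this object.
DB_CONNECTION = create_db_connection(os.environ.get("DATABASE_URL", ""))

def handler(event, context):
    # Only per-request work happens here; the connection created above is
    # shared across warm invocations instead of being rebuilt every time.
    return {"statusCode": 200, "body": "ok"}
```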

Vendor lock-in is a risk

Allowing a vendor to provide all backend services for an application inevitably increases reliance on that vendor. Setting up a serverless architecture with one vendor can make it difficult to switch vendors if necessary, especially since each vendor offers slightly different features and workflows.

Who should use a serverless architecture?

Developers who want to decrease their go-to-market time and build lightweight, flexible applications that can be expanded or updated quickly may benefit greatly from serverless computing.
Serverless architectures will reduce costs for applications that see inconsistent usage, with peak periods alternating with times of little to no traffic. For such applications, purchasing a server or a block of servers that are constantly running and always available, even when unused, may be a waste of resources. A serverless setup will respond instantly when needed and will not incur costs when at rest.
Also, developers who want to push some or all of their application functions close to end users for reduced latency will require at least a partially serverless architecture, since doing so necessitates moving some processes out of the origin server.

When should developers avoid using a serverless architecture?

There are cases when it makes more sense, both from a cost perspective and from a system architecture perspective, to use dedicated servers that are either self-managed or offered as a service. For instance, large applications with a fairly constant, predictable workload may require a traditional setup, and in such cases the traditional setup is probably less expensive. Additionally, it may be prohibitively difficult to migrate legacy applications to a new infrastructure with an entirely different architecture.

The Reliability of Artificial Intelligence

Cognition is “the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses”. Artificial Intelligence, therefore, can be defined as the simulation and automation of cognition using computers. Specific applications of AI include expert systems, speech recognition, and machine vision.

Today, self-learning systems, otherwise known as artificial intelligence or ‘AI’, are changing the way architecture is practiced, just as they are changing our daily lives, whether or not we realize it. If you are reading this on a laptop, tablet, or mobile phone, then you are directly engaging with a number of integrated AI systems, now so embedded in the way we use technology that they often go unnoticed.

As an industry, AI is growing at an exponential rate and is now understood to be on track to be worth $70bn globally by 2020. This is in part due to constant innovation in the speed of microprocessors, which in turn increases the volume of data that can be gathered and stored.

In a typical AI system workflow, AI frameworks provide the building blocks for data scientists and developers to design, train, and validate AI models through a high-level programming interface, without getting into the nitty-gritty of the underlying algorithms. The reliability of AI is therefore critically bound to two aspects: data (sources and preparation) and inference (training and algorithms).
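
As a toy illustration of that workflow, the sketch below uses scikit-learn to walk through both reliability-critical stages: data preparation, then training, validation on held-out data, and inference. The dataset and model are arbitrary stand-ins, not a recommendation from the study.

```python
# Toy end-to-end AI workflow: data preparation -> training -> validation -> inference.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Data: sources and preparation.
X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Inference side: training and algorithm choice, wrapped in a single pipeline
# so the same preprocessing is applied at training and prediction time.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Validation on held-out data is the first, minimal check on reliability.
print(f"validation accuracy: {model.score(X_val, y_val):.2f}")

# Inference on new, unseen input.
print("prediction:", model.predict(X_val[:1]))
```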

AI brings impressive applications, with remarkable benefits for all of us, but there are notable unanswered questions with social, political, and ethical facets.

Even though the hype around AI is sky high, has the technology proven to be useful for enterprises?

As AI enables companies to move from the experimental phase to new business models, a new study indicates that errors can be reduced through careful regulation of human organizations, systems, and enterprises.

This recent study by Thomas G. Dietterich of Oregon State University reviews the properties of highly reliable organizations and asks how enterprises can modify or regulate the scale of AI. The researcher says, “The more powerful technology becomes, the more it magnifies design errors and human failures.”

And the responsibility lies with the tech behemoths, the new High-Reliability Organizations (HROs) that are piloting AI applications to minimize risks. Most of the bias and errors in AI systems are built in by humans, and as companies across the globe build AI applications in various fields, the potential for human error will also increase.

High-End Technology and Its Consequences

As AI technologies automate existing applications and create new opportunities and breakthroughs that never existed before, they also come with their own set of risks, which are inevitable. The study cites Charles Perrow’s book Normal Accidents, written after a massive nuclear accident, which delved into organizations that worked with advanced technologies such as nuclear power plants, aircraft carriers, and the electrical power grid. The team summarized five features of High-Reliability Organizations (HROs):

Preoccupation with failure: HROs know and understand that there exist failure modes that they have not yet observed.

Reluctance to simplify interpretations: HROs build an ensemble of expertise and people so multiple interpretations can be generated for any event.

Sensitivity to operations: HROs maintain staff who have deep situational awareness.

Commitment to resilience: Great enterprises and teams practice recombining existing actions; they have strong procedures and acquire new skills quickly.

Under-specification of structures: HROs give power to each and every team member to make important decisions related to their expertise.

AI Systems and Human Organizations

The researchers draw several lessons from fields where advanced technology has been deployed. AI’s history has been peppered with peaks and valleys, and the technology is currently enjoying an exuberant period, as one senior executive has noted. As enterprises move to bridge the gap between hype and reality by developing cutting-edge applications, here is a primer for organizations to dial down the risks associated with AI.

The goal of human organizations should be to create combined human-machine systems that become high-reliability organizations quickly. The researcher says AI systems must continuously monitor their own behavior, the behavior of the human team, and the behavior of the environment to check for anomalies, near misses, and unanticipated side effects of actions.

Organizations should avoid deploying AI technology where human organizations cannot be trusted to achieve high reliability.

AI systems should also continuously monitor the functioning of the human organization, checking for threats to its high reliability.
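
Both kinds of monitoring ultimately come down to watching streams of signals for anomalies and near misses. The sketch below is illustrative only and not a mechanism proposed by the study; it assumes a numeric health signal, such as a model’s prediction confidence, and flags readings that drift far from recent history.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyMonitor:
    """Flags readings that drift far from the recent history of a signal.

    Illustrative sketch: keeps a sliding window of recent values and reports a
    reading as anomalous when it lies more than z_threshold standard
    deviations away from the window's mean.
    """

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a reading and return True if it looks anomalous."""
        is_anomaly = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

# Example: monitoring a model's prediction confidence over time.
monitor = AnomalyMonitor()
for confidence in [0.91, 0.89, 0.93, 0.90, 0.92, 0.88, 0.90, 0.91, 0.89, 0.92, 0.35]:
    if monitor.observe(confidence):
        print(f"anomalous confidence: {confidence}")
```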

In summary, as with previous technological advances, AI technology increases the risk that failures in human organizations and actions will be magnified by the technology with devastating consequences. To avoid such catastrophic failures, the combined human and AI organization must achieve high reliability.