The Difference between Software Architecture and Design

The difference between software architecture and software design is not well understood by most people. Even for developers, the line is often blurry, and they may mix up elements of software architecture patterns and design patterns. In this blog, I would like to simplify these concepts and explain the differences between software design and software architecture. In addition, I will show you why it is important for a developer to know about both.

The Definition of Software Architecture

In simple words, software architecture is the process of converting software characteristics such as flexibility, scalability, feasibility, reusability, and security into a structured solution that meets both the technical and the business expectations. This definition leads us to ask which characteristics of a piece of software can affect its architecture. The list is long, and it mainly covers the business, operational, functional and non-functional requirements, in addition to the technical requirements.

The Characteristics of Software Architecture

As explained, software characteristics describe the requirements and the expectations of a piece of software at the operational and technical levels. For example, when a product owner says they are competing in a rapidly changing market and must adapt their business model quickly, the software should be extendable, modular, and maintainable. If the business also deals with urgent requests that need to be completed within tight time constraints, then as a software architect you should note that performance, fault tolerance, scalability, and reliability are your key characteristics. Now, if after defining those characteristics the business owner tells you that they have a limited budget for the project, another characteristic comes up: feasibility.

Software Design

While software architecture is responsible for the skeleton and the high-level infrastructure of a piece of software, software design is responsible for the code-level design: what each module does, the scope of each class, the purpose of each function, and so on.

If you are a developer, it is important for you to know what the SOLID principles are and how a design pattern can solve recurring problems.

Single Responsibility Principle: each class should have one single purpose, one responsibility, and therefore only one reason to change.
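
For illustration, a minimal TypeScript sketch of the idea; the invoice classes are hypothetical:

```typescript
// Violates SRP: the class both stores invoice data and persists it,
// so it has two reasons to change.
class Invoice {
  constructor(public amount: number) {}
  saveToDatabase(): void { /* persistence logic mixed into the model */ }
}

// Follows SRP: each class has a single purpose.
class InvoiceModel {
  constructor(public amount: number) {}
}

class InvoiceRepository {
  save(invoice: InvoiceModel): void {
    // persistence logic lives here and only here
    console.log(`Saving invoice of ${invoice.amount}`);
  }
}
```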

Open Closed Principle: a class should be open for extension, but closed for modification. In simple words, you should be able to add more functionality to a class without editing its existing functions in a way that breaks the code that already uses them.
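
A small sketch of the same idea, using a hypothetical shape-area example:

```typescript
// New shapes can be added by extension; the existing code is never modified.
interface Shape {
  area(): number;
}

class Circle implements Shape {
  constructor(private radius: number) {}
  area(): number { return Math.PI * this.radius ** 2; }
}

class Rectangle implements Shape {
  constructor(private width: number, private height: number) {}
  area(): number { return this.width * this.height; }
}

// This function never changes when a new Shape type is introduced.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, shape) => sum + shape.area(), 0);
}
```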

Liskov Substitution Principle: this principle guides the developer to use inheritance in a way that will not break the application logic at any point. Thus, if a child class called “XyClass” inherits from a parent class “AbClass”, the child class should not override functionality of the parent in a way that changes the parent’s behavior, so that you can use an object of XyClass wherever an object of AbClass is expected without breaking the application logic.
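
Reusing the XyClass/AbClass names from above, a rough sketch of a substitution-safe child class:

```typescript
class AbClass {
  // Contract: returns a non-negative count.
  itemCount(): number { return 0; }
}

// XyClass refines behaviour without violating the parent's contract,
// so code written against AbClass keeps working when given an XyClass.
class XyClass extends AbClass {
  private items: string[] = [];
  add(item: string): void { this.items.push(item); }
  itemCount(): number { return this.items.length; }
}

function report(source: AbClass): void {
  console.log(`count = ${source.itemCount()}`); // safe for AbClass or XyClass
}

report(new XyClass());
```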

Interface Segregation Principle: simply put, since a class can implement multiple interfaces, structure your code so that a class is never forced to implement a function that is not relevant to its purpose. So, keep your interfaces small and focused.
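
A brief sketch, assuming hypothetical printer and scanner interfaces:

```typescript
// Small, focused interfaces instead of one "fat" interface.
interface Printer {
  print(doc: string): void;
}

interface Scanner {
  scan(): string;
}

// A multifunction device implements both interfaces.
class OfficeMachine implements Printer, Scanner {
  print(doc: string): void { console.log(`printing ${doc}`); }
  scan(): string { return "scanned content"; }
}

// A simple device is never forced to stub out scan().
class BasicPrinter implements Printer {
  print(doc: string): void { console.log(`printing ${doc}`); }
}
```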

Dependency Inversion Principle: if you have ever followed TDD for your application development, then you know how important decoupled code is for testability and modularity. In other words, if a class such as “Purchase” depends on a “Users” class, then the User object instantiation should come from outside the “Purchase” class.
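
A minimal sketch using the Purchase/Users example, with constructor injection standing in for “instantiation from outside”; the interface and fake object are hypothetical:

```typescript
// Purchase depends on an abstraction, and the concrete user object is
// supplied from outside, which keeps Purchase easy to unit-test.
interface User {
  id: string;
  charge(amount: number): void;
}

class Purchase {
  constructor(private user: User) {}
  checkout(amount: number): void {
    this.user.charge(amount);
  }
}

// In a test, a lightweight fake can stand in for the real Users class.
const fakeUser: User = {
  id: "u-1",
  charge: (amount) => console.log(`charged ${amount}`),
};
new Purchase(fakeUser).checkout(42);
```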

Remember that there is a difference between a software architect and a software developer. Software architects are usually experienced team leaders who have good knowledge of existing solutions, which helps them make the right decisions in the planning phase. A software developer should know more about software design, and enough about software architecture to make internal communication within the team easier.

Microservices Patterns

What are microservices?

Microservices – also known as the microservice architecture – is an architectural style that structures an application as a collection of services that are

  • Highly maintainable and testable
  • Loosely coupled
  • Independently deployable
  • Organized around business capabilities
  • Owned by a small team

Design Patterns for Microservices

Microservice architecture has become the de facto choice for modern application development. Though it solves certain problems, it is not a silver bullet: it has several drawbacks, and when using this architecture there are numerous issues that must be addressed. This brings about the need to recognize the common patterns in these problems and solve them with reusable solutions, which is why design patterns for microservices are worth discussing. Before we dive into the design patterns, we need to understand the principles on which microservice architecture has been built:

  • Scalability
  • Availability
  • Resiliency
  • Independent, autonomous
  • Decentralized governance
  • Failure isolation
  • Auto-Provisioning
  • Continuous delivery through DevOps

The microservice architecture pattern language is a collection of patterns for applying the microservice architecture. It has two goals:

  • The pattern language enables you to decide whether microservices are a good fit for your application.
  • The pattern language enables you to use the microservice architecture successfully.

Applying all these principles brings several challenges and issues. Let’s discuss those problems and their solutions.

Application architecture patterns

1. Decomposition

How to decompose an application into services?

  • Decompose by business capability – define services corresponding to business capabilities
  • Decompose by subdomain – define services corresponding to DDD subdomains
  • Self-contained Service – design services to handle synchronous requests without waiting for other services to respond
  • Service per team – Each service is owned by a team, which has sole responsibility for making changes.

2. Data management

How to maintain data consistency and implement queries?

  • Database per Service – each service has its own private database
  • Shared database – services share a database
  • Saga – use sagas, which are sequences of local transactions, to maintain data consistency across services (a sketch of the idea follows this list)
  • API Composition – implement queries by invoking the services that own the data and performing an in-memory join
  • CQRS – implement queries by maintaining one or more materialized views that can be efficiently queried
  • Domain event – publish an event whenever data changes
  • Event sourcing – persist aggregates as a sequence of events
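
To make the saga item above a bit more concrete, here is a rough sketch of a saga as a sequence of local transactions with compensating actions; the order and payment steps are purely hypothetical:

```typescript
// A saga is a sequence of local transactions; if one step fails, the steps
// that already completed are undone by compensating transactions.
interface SagaStep {
  action: () => Promise<void>;
  compensation: () => Promise<void>;
}

async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  try {
    for (const step of steps) {
      await step.action();
      completed.push(step);
    }
  } catch (err) {
    // Roll back the completed steps in reverse order.
    for (const step of completed.reverse()) {
      await step.compensation();
    }
    throw err;
  }
}

// Hypothetical usage: create an order, then reserve payment;
// cancel and refund if a later step fails.
runSaga([
  { action: async () => console.log("order created"), compensation: async () => console.log("order cancelled") },
  { action: async () => console.log("payment reserved"), compensation: async () => console.log("payment refunded") },
]).catch(console.error);
```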

3. Transactional messaging

How to publish messages as part of a database transaction?

  • Transactional outbox – publish messages by inserting them into an outbox table as part of the database transaction (a sketch of the idea follows this list)
  • Transaction log tailing – publish messages by tailing the database transaction log
  • Polling publisher – publish messages by polling the outbox table
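
A minimal sketch of the transactional outbox idea referenced above; the `db`/`tx` abstractions and table names are hypothetical:

```typescript
// The business change and the outgoing message are written in the same
// database transaction; a separate process (polling publisher or log tailer)
// later reads the outbox table and forwards the messages to the broker.
interface Tx {
  insert(table: string, row: Record<string, unknown>): Promise<void>;
}
interface Db {
  transaction<T>(work: (tx: Tx) => Promise<T>): Promise<T>;
}

async function createOrder(db: Db, order: { id: string; total: number }): Promise<void> {
  await db.transaction(async (tx) => {
    await tx.insert("orders", order);   // local state change
    await tx.insert("outbox", {         // message recorded atomically with it
      type: "OrderCreated",
      payload: JSON.stringify(order),
    });
  });
}
```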

4. Testing

How to make testing easier?

  • Consumer-driven contract test – a test suite for a service that is written by the developers of another service that consumes it (a sketch follows this list)
  • Consumer-side contract test – a test suite for a service client (e.g. another service) that verifies that it can communicate with the service
  • Service component test – a test suite that tests a service in isolation using test doubles for any services that it invokes
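
As a rough illustration of a consumer-driven contract test, a sketch assuming Node 18+ with its built-in test runner and a hypothetical orders endpoint:

```typescript
// Written by the consuming team, run in the provider's build: it only asserts
// the parts of the response the consumer actually relies on.
import assert from "node:assert";
import { test } from "node:test";

test("GET /orders/{id} satisfies the consumer's contract", async () => {
  const res = await fetch("http://localhost:8080/orders/123");
  assert.strictEqual(res.status, 200);

  const body = await res.json();
  assert.strictEqual(typeof body.id, "string");
  assert.strictEqual(typeof body.total, "number");
});
```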

5. Deployment patterns

How to deploy an application’s services?

  • Multiple service instances per host – deploy multiple service instances on a single host
  • Service instance per host – deploy each service instance in its own host
  • Service instance per VM – deploy each service instance in its own VM
  • Service instance per Container – deploy each service instance in its own container
  • Serverless deployment – deploy a service using a serverless deployment platform
  • Service deployment platform – deploy services using a highly automated deployment platform that provides a service abstraction

6. Cross cutting concerns

How to handle cross cutting concerns?

  • Microservice chassis – a framework that handles cross-cutting concerns and simplifies the development of services
  • Externalized configuration – externalize all configuration such as database location and credentials
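
A minimal sketch of externalized configuration; the environment variable names are hypothetical:

```typescript
// The service reads its database location and credentials from the
// environment instead of hard-coding them, so the same build runs
// unchanged in every environment.
interface DbConfig {
  host: string;
  user: string;
  password: string;
}

function loadDbConfig(): DbConfig {
  const { DB_HOST, DB_USER, DB_PASSWORD } = process.env;
  if (!DB_HOST || !DB_USER || !DB_PASSWORD) {
    throw new Error("Missing required database configuration");
  }
  return { host: DB_HOST, user: DB_USER, password: DB_PASSWORD };
}
```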

Communication patterns

1. Style

Which communication mechanisms do services use to communicate with each other and their external clients?

  • Remote Procedure Invocation – use an RPI-based protocol for inter-service communication
  • Messaging – use asynchronous messaging for inter-service communication
  • Domain-specific protocol – use a domain-specific protocol

2. External API

How do external clients communicate with the services?

  • API gateway – a service that provides each client with a unified interface to the services
  • Backend for front-end – a separate API gateway for each kind of client

3. Service discovery

How does the client of an RPI-based service discover the network location of a service instance?

  • Client-side discovery – client queries a service registry to discover the locations of service instances (a sketch follows this list)
  • Server-side discovery – router queries a service registry to discover the locations of service instances
  • Service registry – a database of service instance locations
  • Self-registration – service instance registers itself with the service registry
  • 3rd party registration – a 3rd party registers a service instance with the service registry
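
A rough sketch of client-side discovery against a hypothetical service registry endpoint:

```typescript
// The client asks the service registry for the instances of a service
// and picks one itself before making the call.
interface ServiceInstance {
  host: string;
  port: number;
}

async function resolve(serviceName: string): Promise<ServiceInstance> {
  const res = await fetch(`http://registry.internal/services/${serviceName}`);
  const instances: ServiceInstance[] = await res.json();
  // Naive load balancing: pick a random instance.
  return instances[Math.floor(Math.random() * instances.length)];
}

async function callOrders(): Promise<Response> {
  const { host, port } = await resolve("orders");
  return fetch(`http://${host}:${port}/orders`);
}
```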

4. Reliability

How to prevent a network or service failure from cascading to other services?

  • Circuit Breaker – invoke a remote service via a proxy that fails immediately when the failure rate of the remote call exceeds a threshold
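
A minimal circuit breaker sketch; the failure threshold and reset window are arbitrary illustrative values:

```typescript
// After too many consecutive failures the breaker "opens" and calls fail
// immediately, until a cool-down period has passed.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private maxFailures = 5, private resetMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.maxFailures) {
      if (Date.now() - this.openedAt < this.resetMs) {
        throw new Error("Circuit open: failing fast");
      }
      this.failures = 0; // half-open: allow a trial call
    }
    try {
      const result = await fn();
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      this.openedAt = Date.now();
      throw err;
    }
  }
}

// Usage: wrap remote calls so a struggling downstream service fails fast.
const breaker = new CircuitBreaker();
breaker.call(() => fetch("http://inventory.internal/items")).catch(console.error);
```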

5. Security

How to communicate the identity of the requestor to the services that handle the request?

  • Access Token – a token that securely stores information about the user and is exchanged between services
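
As an illustration only, a sketch that issues and verifies a signed access token using the `jsonwebtoken` package; the claims and secret handling are deliberately simplified:

```typescript
import jwt from "jsonwebtoken";

const SECRET = process.env.TOKEN_SECRET ?? "dev-only-secret";

// An edge service (e.g. the API gateway) issues the token after authentication...
const token = jwt.sign({ sub: "user-42", roles: ["customer"] }, SECRET, {
  expiresIn: "15m",
});

// ...and each downstream service verifies it before trusting the identity.
function identityFrom(authHeader: string) {
  const raw = authHeader.replace(/^Bearer /, "");
  return jwt.verify(raw, SECRET); // throws if the signature or expiry is invalid
}

console.log(identityFrom(`Bearer ${token}`));
```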

6. Observability

How to understand the behavior of an application and troubleshoot problems?

  • Log aggregation – aggregate application logs
  • Application metrics – instrument a service’s code to gather statistics about operations
  • Audit logging – record user activity in a database
  • Distributed tracing – instrument services with code that assigns each external request a unique identifier that is passed between services. Record information (e.g. start time, end time) about the work (e.g. service requests) performed when handling the external request in a centralized service
  • Exception tracking – report all exceptions to a centralized exception tracking service that aggregates and tracks exceptions and notifies developers.
  • Health check API – service API (e.g. HTTP endpoint) that returns the health of the service and can be pinged, for example, by a monitoring service (a sketch follows this list)
  • Log deployments and changes
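
A minimal health check endpoint sketch using Express (any HTTP framework works similarly); the dependency check is hypothetical:

```typescript
import express from "express";

const app = express();

// An orchestrator or monitoring service pings /health and acts on the status code.
app.get("/health", async (_req, res) => {
  const dbOk = true; // e.g. run a cheap "SELECT 1" against the database here
  if (dbOk) {
    res.status(200).json({ status: "UP" });
  } else {
    res.status(503).json({ status: "DOWN" });
  }
});

app.listen(8080);
```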

UI patterns

How to implement a UI screen or page that displays data from multiple services?

  • Server-side page fragment composition – build a webpage on the server by composing HTML fragments generated by multiple, business capability/subdomain-specific web applications
  • Client-side UI composition – build a UI on the client by composing UI fragments rendered by multiple, business capability/subdomain-specific UI components

Serverless Computing – Pros and Cons

Serverless computing is the fastest-growing cloud service model right now, with an annual growth rate of 75%, according to RightScale’s State of the Cloud report. That’s hardly surprising, given the technology’s ability to lower costs, reduce operational complexity, and increase DevOps efficiencies.

So, as the calendar was poised to turn to a new year, we asked several experts what to expect next from this rising technology. While we received a wide range of answers, everyone agreed that serverless will mature and see even greater adoption rates in 2020.

Why use Serverless computing?

Serverless computing offers a number of advantages over traditional cloud-based or server-centric infrastructure. For many developers, serverless architectures offer greater scalability, more flexibility, and quicker time to release, all at a reduced cost. With serverless architectures, developers do not need to worry about purchasing, provisioning, and managing backend servers. However, serverless computing is not a magic bullet for all web application developers.

What are the advantages of serverless computing?

No server management is necessary

Although ‘serverless’ computing does actually take place on servers, developers never have to deal with the servers. They are managed by the vendor. This can reduce the investment necessary in DevOps, which lowers expenses, and it also frees up developers to create and expand their applications without being constrained by server capacity.

Developers are only charged for the server space they use, reducing cost

As in a ‘pay-as-you-go’ phone plan, developers are only charged for what they use. Code only runs when backend functions are needed by the serverless application, and the code automatically scales up as needed. Provisioning is dynamic, precise, and real-time. Some services are so exact that they break their charges down into 100-millisecond increments. In contrast, in a traditional ‘server-full’ architecture, developers have to project in advance how much server capacity they will need and then purchase that capacity, whether they end up using it or not.

Serverless architectures are inherently scalable

Imagine if the post office could somehow magically add and decommission delivery trucks at will, increasing the size of its fleet as the amount of mail spikes (say, just before Mother’s Day) and decreasing its fleet for times when fewer deliveries are necessary. That’s essentially what serverless applications are able to do.
Applications built with a serverless infrastructure will scale automatically as the user base grows or usage increases. If a function needs to be run in multiple instances, the vendor’s servers will start up, run, and end them as they are needed, often using containers (the functions start up more quickly if they have been run recently – see ‘Performance may be affected’ below). As a result, a serverless application will be able to handle an unusually high number of requests just as well as it can process a single request from a single user. A traditionally structured application with a fixed amount of server space can be overwhelmed by a sudden increase in usage.

Quick deployments and updates are possible

Using a serverless infrastructure, there is no need to upload code to servers or do any backend configuration in order to release a working version of an application. Developers can very quickly upload bits of code and release a new product. They can upload code all at once or one function at a time, since the application is not a single monolithic stack but rather a collection of functions provisioned by the vendor.
This also makes it possible to quickly update, patch, fix, or add new features to an application. It is not necessary to make changes to the whole application; instead, developers can update the application one function at a time.
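
To give a feel for how small these independently deployable units are, here is a minimal sketch of a single function with an AWS Lambda-style handler signature; the event shape and purpose are hypothetical:

```typescript
// One function, uploaded and updated on its own, without touching
// the rest of the application.
export const handler = async (event: { queryStringParameters?: { name?: string } }) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```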

Code can run closer to the end user, decreasing latency

Because the application is not hosted on an origin server, its code can be run from anywhere. It is therefore possible, depending on the vendor used, to run application functions on servers that are close to the end user. This reduces latency because requests from the user no longer have to travel all the way to an origin server.

What are the disadvantages of serverless computing?

Testing and debugging become more challenging

It is difficult to replicate the serverless environment in order to see how code will actually perform once deployed. Debugging is more complicated because developers do not have visibility into backend processes, and because the application is broken up into separate, smaller functions.

Serverless computing introduces new security concerns

When vendors run the entire backend, it may not be possible to fully vet their security, which can especially be a problem for applications that handle personal or sensitive data.
Because companies are not assigned their own discrete physical servers, serverless providers will often be running code from several of their customers on a single server at any given time. This issue of sharing machinery with other parties is known as ‘multi-tenancy’ – think of several companies trying to lease and work in a single office at the same time. Multi-tenancy can affect application performance and, if the multi-tenant servers are not configured properly, could result in data exposure. Multi-tenancy has little to no impact for networks that sandbox functions correctly and have powerful enough infrastructure.

Serverless architectures are not built for long-running processes

This limits the kinds of applications that can cost-effectively run in a serverless architecture. Because serverless providers charge for the amount of time code is running, it may cost more to run an application with long-running processes in a serverless infrastructure compared to a traditional one.

Performance may be affected

Because it’s not constantly running, serverless code may need to ‘boot up’ when it is used. This startup time may degrade performance. However, if a piece of code is used regularly, the serverless provider will keep it ready to be activated – a request for this ready-to-go code is called a ‘warm start.’ A request for code that hasn’t been used in a while is called a ‘cold start.’

Vendor lock-in is a risk

Allowing a vendor to provide all backend services for an application inevitably increases reliance on that vendor. Setting up a serverless architecture with one vendor can make it difficult to switch vendors if necessary, especially since each vendor offers slightly different features and workflows.

Who should use a serverless architecture?

Developers who want to decrease their go-to-market time and build lightweight, flexible applications that can be expanded or updated quickly may benefit greatly from serverless computing.
Serverless architectures will reduce costs for applications that see inconsistent usage, with peak periods alternating with times of little to no traffic. For such applications, purchasing a server or a block of servers that are constantly running and always available, even when unused, may be a waste of resources. A serverless setup will respond instantly when needed and will not incur costs when at rest.
Also, developers who want to push some or all of their application functions close to end users for reduced latency will require at least a partially serverless architecture, since doing so necessitates moving some processes out of the origin server.

When should developers avoid using a serverless architecture?

There are cases when it makes more sense, both from a cost perspective and from a system architecture perspective, to use dedicated servers that are either self-managed or offered as a service. For instance, large applications with a fairly constant, predictable workload may require a traditional setup, and in such cases the traditional setup is probably less expensive. Additionally, it may be prohibitively difficult to migrate legacy applications to a new infrastructure with an entirely different architecture.