A History of Technology: From PCs to APIs

We’ve covered the “5 Must-Haves When Planning Your Company’s Tech Modernization,” and by applying those five strategies, you’ll be a step closer to the speed and cost efficiencies that aren’t possible with your current on-prem setup. Now, let’s look at what this means for you.

To gain a full picture for your technology modernization plans, it helps to review where we’ve come from and where we are today. Let’s walk through the history of these technologies and see how each wave of modernization solved the problems left by the one before it.

Building desktop applications

The rise of the personal computer led businesses and individuals to digitize their manual processes and operations on a massive scale. The result was a proliferation of desktop applications that bundled the user interface, business logic, and data stores (files and databases), all installed on each desktop to meet every kind of business need. With files and databases scattered everywhere, companies faced massive data duplication, data inconsistency, and security issues.

To overcome these issues, the next wave of technologies was developed to build client-server applications. These apps centralized data and files in secure central locations: database servers and file servers. Desktop/client applications connected to these servers to upload and download data and files as needed, based on security credentials. This centralized and secured database and file storage, but it did not solve the deployment problem: whenever a new release changed the database schema or business logic, older client versions became incompatible, so each release of the desktop/client app still had to be installed on every user’s computer.
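
To make the pattern concrete, here is a minimal sketch of a client-server data access call in Python – a hypothetical desktop app pulling records from a central database server instead of a local file. The psycopg2 PostgreSQL driver is used as a stand-in, and the host, credentials, and schema are assumptions for illustration.

```python
# A minimal client-server sketch: the UI and logic live on the desktop, while
# shared data lives on a central database server. Host, credentials, and the
# orders table below are hypothetical.
import psycopg2  # PostgreSQL client driver, used here as a stand-in

def fetch_customer_orders(customer_id: int) -> list[tuple]:
    # Connect to the central server with security credentials, rather than
    # opening a local file copied onto every desktop.
    conn = psycopg2.connect(
        host="db.example.internal",  # central database server (assumed)
        dbname="sales",
        user="app_user",
        password="secret",  # in practice, per-user credentials
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT order_id, total FROM orders WHERE customer_id = %s",
                (customer_id,),
            )
            return cur.fetchall()
    finally:
        conn.close()
```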

Surfing the web

As the internet started gaining popularity, more and more people began browsing static HTML files served from central locations – as we know them, web servers. Tech companies saw this as an opportunity to overcome the desktop/client limitation of installing every release on every user’s computer. They developed new technologies for building web applications and sites that were installed once on central web servers, which users then accessed through web browsers.

Businesses and third-party software development companies quickly adopted these new technologies, rewriting their desktop and client-server applications as web applications that could reach a broader user base from central web servers. This drove massive growth in data centers all over the world, with more and more servers needed to host rapidly growing web applications. But not every web application ran at a constant workload throughout the year.

Instead, many applications are highly seasonal in their use cases and user workloads: business applications are used heavily during weekday business hours, while shopping and entertainment websites see more traffic during evenings, weekends, and holiday seasons. So businesses had to over-provision their web servers, database servers, integration servers, storage, and other infrastructure to meet peak workloads.

Many infrastructure and application teams got bogged down supporting and maintaining servers, storage, releases, and testing during nights and weekends, at the expense of family and personal life. The result was many over-worked, frustrated, burned-out employees.

Supporting more of, well, everything

Application and infrastructure teams had to install, configure, and test software on all those extra servers, using manual and semi-automated processes to make sure everything worked as expected. That meant excess purchases of servers, storage, and other infrastructure components; extra workload for infrastructure teams to configure, prepare, and secure the servers; and more nights and weekends for development teams supporting the release and testing of web applications and sites.

It also meant a massive increase in backup storage requirements and the network bandwidth needed to support regular backups – often run on nights and weekends. The result? Companies carried not only the excess cost of supporting the hardware, but also a poor work environment in which burned-out employees worked more while producing less, ultimately reducing returns on IT investments.

Agile and DevOps to the rescue

Soon, Agile methodologies and automated DevOps tools became popular. Development teams could work on small sets of features and make frequent releases through automated CI/CD pipelines, cutting the time needed to develop, test, and deploy. No more spending an entire weekend on a single release – although, because releases were more frequent, teams came in on more weekends than before, giving back some of the gains.

Solving for inefficiencies

The next stage was finding a way to solve infrastructure and data center inefficiencies. Companies needed to rein in large up-front capital expenditures, optimize the use of compute resources, and drastically reduce the manual workload of configuring, maintaining, and upgrading servers and other hardware. Development teams also needed to accelerate release cycles, so speeding up the provisioning and de-provisioning of infrastructure resources was essential.

Server virtualization solved this to a great extent, but virtualization was still limited by the physical hardware each company owned. Big technology companies saw these drawbacks and the opportunities they presented. They started building their own massive data centers in multiple regions across the world, allowing them to “rent” compute resources to multiple customers in secure, isolated sandboxes. These virtual infrastructures increased the overall utilization rate of compute and storage resources – thereby reducing the per-unit cost – ultimately benefiting the businesses and end users who consume them.

The collections of these massive data centers across regions are now called cloud platforms. To provide high availability and better performance, they added capabilities to rapidly configure and replicate data and applications across multiple data centers and regions with simple configurations or scripts. With these advancements, instead of every company purchasing its own servers, storage, and other infrastructure components – and infrastructure teams spending weeks or months preparing servers and environments for application teams – a business can sign up with one or more cloud platform providers, run a few scripts or step through a few configuration wizards, and have virtual infrastructure available to development teams in a matter of minutes to hours.
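
As a rough illustration of that scripted provisioning, here is a minimal sketch using AWS’s boto3 SDK to launch a single virtual server. The region, machine image ID, and instance size are placeholder assumptions, and the same idea applies on any cloud platform.

```python
# A minimal sketch of provisioning a virtual server with a script instead of
# purchasing and racking physical hardware. Uses AWS's boto3 SDK as one
# example; the region, AMI ID, and instance size are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t3.micro",          # small instance size for the example
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned virtual server {instance_id} in minutes, not months.")
```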

Great idea: Only pay for what you use

This kind of infrastructure provisioning and management is called Infrastructure-as-a-Service (IaaS), and scripting it is known as Infrastructure-as-Code. To take these capabilities further, cloud providers have built many services – Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) – that automatically provision the necessary compute resources and add or remove capacity based on the workload. This maintains optimal performance, so businesses and end users can focus on using the services efficiently without managing the underlying infrastructure. Plus, they only pay for actual usage.
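
To make the pay-for-use idea concrete, here is a hypothetical sketch of the decision loop behind autoscaling. The thresholds, metric source, and provisioning call are all assumptions standing in for a real monitoring service and cloud API.

```python
# A hypothetical autoscaling decision loop: add capacity when workload is
# high, remove it when workload is low, and pay only for what is running.
# Thresholds and the stand-in helpers are assumptions for illustration.
import random

SCALE_OUT_THRESHOLD = 0.75  # average CPU above this: add a server
SCALE_IN_THRESHOLD = 0.25   # average CPU below this: remove a server
MIN_SERVERS, MAX_SERVERS = 1, 10

def average_cpu_utilization() -> float:
    # Stand-in for a query to a real monitoring service.
    return random.random()

def set_server_count(count: int) -> None:
    # Stand-in for a provisioning call to the cloud platform.
    print(f"Running {count} server(s); billed only for these.")

def autoscale(current_servers: int) -> int:
    cpu = average_cpu_utilization()
    if cpu > SCALE_OUT_THRESHOLD and current_servers < MAX_SERVERS:
        current_servers += 1  # peak workload: scale out
    elif cpu < SCALE_IN_THRESHOLD and current_servers > MIN_SERVERS:
        current_servers -= 1  # quiet period: scale in
    set_server_count(current_servers)
    return current_servers

servers = 2
for _ in range(5):  # a few iterations of the control loop
    servers = autoscale(servers)
```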

Simplifying how we build, test and deploy

The majority of on-prem applications built in the last 10 to 15 years were developed in an n-tier architecture style, typically consisting of a presentation tier, a business tier, a data access tier, and an integration tier. These would all be bundled into a single deployment package and deployed on each server, requiring more compute resources like CPU and memory to run all the tiers together. Depending on the complexity of the logic or the volume of data being processed, each tier would need different levels of CPU and memory, and the tiers would start competing for the same compute resources. In such cases, more high-capacity servers had to be added to handle the workload, since there was no way to run each tier on a separate server. To simplify some of these challenges, Service-Oriented Architecture (SOA) was introduced, hosting the different tiers as web services running on separate servers – but SOA web services had their own challenges and limitations.
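
Here is a rough sketch of why a bundled n-tier deployment competes for resources: all three tiers below run in one process and ship as one package, so a change to any tier means redeploying everything. The function names and data are illustrative assumptions.

```python
# A rough sketch of an n-tier monolith: presentation, business, and data
# access tiers share one deployment and one process, so they compete for the
# same CPU and memory. Names and data below are illustrative assumptions.

def data_access_tier(customer_id: int) -> list[dict]:
    # Stand-in for database queries; in the monolith this runs in-process.
    return [{"order_id": 1, "total": 120.0}, {"order_id": 2, "total": 80.0}]

def business_tier(customer_id: int) -> dict:
    # Business logic computes on data fetched by the tier below it.
    orders = data_access_tier(customer_id)
    return {
        "customer_id": customer_id,
        "order_count": len(orders),
        "lifetime_value": sum(o["total"] for o in orders),
    }

def presentation_tier(customer_id: int) -> str:
    # The UI is packaged with everything else: a release to any tier
    # redeploys the whole bundle.
    s = business_tier(customer_id)
    return f"Customer {s['customer_id']}: {s['order_count']} orders, ${s['lifetime_value']:.2f}"

print(presentation_tier(42))
```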

A new breed of REST APIs was created to address these issues, to meet the needs of the ever-growing number of mobile and Internet of Things (IoT) devices that need lightweight APIs responding in milliseconds, and to enable broader enterprise systems integration. These REST APIs simplify integration and security, and their JSON payloads are much smaller than those of the earlier SOAP/XML-based SOA web services.
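
For a sense of how lightweight this is, here is a minimal sketch of a REST endpoint returning a small JSON payload, using Python’s Flask framework as one possible choice. The route and in-memory data are assumptions for the example.

```python
# A minimal REST API sketch returning a small JSON payload, using Flask as
# one possible framework. The route and in-memory data are assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory data standing in for a real data store.
ORDERS = {1: {"order_id": 1, "status": "shipped", "total": 120.0}}

@app.get("/orders/<int:order_id>")
def get_order(order_id: int):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    # A compact JSON body, versus a verbose SOAP/XML envelope.
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=8080)
```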

These REST APIs also allowed us to look at big monolithic and n-tier systems in a different way: instead of horizontal tiers, a system can be split vertically by business or functional domain and exposed as small sets of APIs called microservices, which any application or device can consume in a secure way.

This split into smaller sets of APIs makes each service easier to build, test, and deploy onto separate servers, virtual machines, or Docker containers, and lets each one scale independently of the others based on its workload by automatically adding or removing compute resources.

The API and microservice patterns became the foundation of cloud technology implementations that use cloud compute resources efficiently, scaling out and scaling in as needed to maintain optimal performance. To simplify the configuration, security, and management of APIs, cloud platform providers have built API gateways that let API admins expose traditional web services, REST APIs, and cloud-native services through a single gateway and secure them with API keys and other advanced options.
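
As a final hedged sketch, here is what calling a gateway-fronted API with an API key might look like from a client, using Python’s requests library. The URL is hypothetical, and the x-api-key header is one common gateway convention rather than a universal standard.

```python
# A sketch of a client calling a microservice through an API gateway,
# authenticating with an API key. The URL is hypothetical, and "x-api-key"
# is one common header convention among gateway products.
import requests

API_BASE = "https://api.example.com/v1"  # gateway endpoint (assumed)
API_KEY = "replace-with-your-key"        # issued by the gateway admin

response = requests.get(
    f"{API_BASE}/orders/1",
    headers={"x-api-key": API_KEY},
    timeout=5,
)
response.raise_for_status()  # the gateway rejects missing or invalid keys
print(response.json())
```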


Ready for what's next?

Together, we can help you identify the challenges facing you right now and take the first steps to elevate your cloud environment.