Functions vs. containers – which is better?

24.2.2022 | 11 minutes of reading time

According to the 2021 Lünendonk study “Cloud-Native Software Development”, 64% of the study participants are already partially or completely “cloud-native” in the private or public cloud. Products such as AWS Elastic Container Service (ECS) or managed Kubernetes offerings are a popular starting point when containers are brought from on-premises or infrastructure-as-a-service (IaaS) platforms into the cloud. The container hype seems to keep gaining momentum for new applications as well.
Serverless and Functions as a Service (FaaS), on the other hand, are something far fewer developers come into contact with in their day-to-day work. In my experience, this imbalance is unjustified: FaaS, i.e. a higher level of abstraction above containers, is usually faster, cheaper and less complicated.

We will look specifically at AWS Lambda and ECS with Fargate (see this blog post on Fargate). Nevertheless, we remain on the level of “Functions vs. containers” so that the comparison stays applicable to other providers, services and container orchestrators (e.g. Kubernetes). To ensure that the comparison does not take place in a vacuum, we imagine a highly available, distributed system with publicly available REST APIs. These APIs could then be accessed by user interfaces and other external consumers.
Using common criteria such as cost, security, complexity, scalability and flexibility, we evaluate the suitability of FaaS/serverless and container applications for such a project.

Pricing models in the cloud

The operating costs of containerised applications are made up of several items. The baseline is defined by the compute capacity required, e.g. two “small” EC2 machines for two containers. Other costs for ECS Fargate, such as the recommended Application Load Balancer (ALB), are usually incurred per service per environment. In addition, there are costs for data transfer and the Elastic Container Registry (ECR) for hosting our own images.
Even the first application with a single container produces visible costs every month. But one container rarely comes alone: an application's availability only becomes acceptable with two or more containers per service, and at least in production these need to scale under increased load. Expensive NAT gateways are often added to the bill when a connection to servers inside a VPN has to be established or consumers require fixed source IPs. Of course, we do not work with just one production environment either: we need the minimum number of containers and their helpers on three environments, which results in three times the basic costs.

3x Application Load Balancer: 59.20 USD
4x ECS Fargate tasks (ARM architecture, 0.5 CPU & 1 GB RAM): 65.42 USD
Total: 124.62 USD per month

An ECS Fargate container application on three environments does not get any cheaper than this. Costs can only rise from here: scaling, additional applications, the container registry, traffic, network requirements (NAT gateways) and more. At this scale, however, the cost is not a deal breaker; a few container applications are affordable and fine. It is multiple containerised applications per team on many environments, productively exposed to load and scaling requirements, that become expensive. This really hurts when you see that the containers' resources are hardly used, which is observable at night, on weekends and most of the time in non-productive environments. Regardless of the load, the provider has fixed costs for the reserved hardware in addition to the usage-based resources, and puts them onto our bills.

Lambda and other managed services (e.g. DynamoDB, API Gateway, S3, KMS, Secrets Manager, SQS, SNS) in the cloud, on the other hand, are paid for on a pay-per-use or low unit pricing basis. The “free tier” is generous and single-digit amounts mostly just show up on the bill when the software is already being used by real people. If it is not used, the basic costs are vanishingly small. If the functions scale under heavier load, the costs are transparent and rise predictably. If there is an urgent need for optimisation (e.g. a brutal infinite loop between functions), this quickly becomes clear through monitoring and alarms and can be tackled.

4 million Lambda requests at 500 ms each with 512 MB memory (ARM architecture): 8.60 USD
2 million API Gateway requests: 7.40 USD
Total: 16.00 USD per month
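
As a plausibility check, the Lambda figure can be reproduced by hand. Assuming (my numbers, not part of the original table) the ARM price of 0.0000133334 USD per GB-second, 0.20 USD per million requests, and the monthly free tier of 400,000 GB-seconds and 1 million requests:

$$
\begin{aligned}
4\,\text{M requests} \times 0.5\,\text{s} \times 0.5\,\text{GB} &= 1\,\text{M GB-seconds} \\
(1{,}000{,}000 - 400{,}000)\,\text{GB-s} \times 0.0000133334\,\text{USD/GB-s} &\approx 8.00\,\text{USD} \\
(4\,\text{M} - 1\,\text{M})\,\text{requests} \times 0.20\,\text{USD/M} &= 0.60\,\text{USD}
\end{aligned}
$$

Together that is the 8.60 USD from the table, and every factor in the calculation is a knob we control ourselves.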

Strongly cloud-managed resources like Functions are more transparent units of accounting than containers and VMs. They provide detailed insight into how expensive the business and technical functions of our application are. A containerised application only provides this information once we instrument and examine it from a business and technical perspective. In addition, every containerised application costs noticeably more money from the first minute onwards, and this increases significantly when scaling up or cutting it into several applications. Applications that rely on functions and managed services generate their costs much later and only become more expensive than containers in use cases with long processing times and high load. The use case “REST API” usually only requires Functions to run for a few hundred milliseconds per call. For long-running jobs it becomes expensive, and containers become more interesting again. A FaaS-based (“serverless”), event-coupled architecture is therefore many times cheaper in the majority of cases.

Scalability

When running containerised services, we can define the virtual resources of a container in terms of CPU and memory. Autoscaling needs to be configured in terms of when to scale up and down. If the application is architected to support this and we haven't built in pitfalls ourselves, it can scale up (and become expensive) to run an incredible number of containers. We always scale the whole service, although often only individual functionalities are the actual bottleneck.
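
As a minimal sketch of what this configuration looks like in practice, here is the autoscaling setup for a hypothetical Fargate service in AWS CDK (TypeScript); the service name and thresholds are invented for illustration:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { Duration } from 'aws-cdk-lib';

// Assuming an existing Fargate service, e.g. a hypothetical "user service":
declare const userService: ecs.FargateService;

// Scaling always targets the whole service, i.e. all of its functionality
// at once, even if only one endpoint is the actual bottleneck.
const scaling = userService.autoScaleTaskCount({
  minCapacity: 2,  // two tasks for acceptable availability
  maxCapacity: 10, // upper bound, which also bounds the cost
});

// Add tasks when average CPU utilisation exceeds the target.
scaling.scaleOnCpuUtilization('CpuScaling', {
  targetUtilizationPercent: 60,
  scaleInCooldown: Duration.minutes(5),
  scaleOutCooldown: Duration.minutes(1),
});
```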

Lambda functions (e.g. one function per REST resource) can be individually scaled from small to huge in order to process requests fast enough. The maximum RAM available (CPU scales proportionally) is set per function. The timeout defines how long the function may run. With “reserved concurrency”, the permitted number of parallel executions of a function can be configured. On this basis, AWS can scale functions up and down quickly without bad side effects for operators and users. The costs remain pay-per-use, and customers will not notice higher latencies as long as (as with containers) no pitfalls are built in that stand in the way of scaling.
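
For comparison, a sketch of the same knobs on a single function, again in CDK (TypeScript); the function name, handler and asset path are assumptions for illustration:

```typescript
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Stack, Duration } from 'aws-cdk-lib';

declare const stack: Stack;

// One function per REST resource, each with its own scaling profile.
const getUserProfile = new lambda.Function(stack, 'GetUserProfile', {
  runtime: lambda.Runtime.NODEJS_18_X,
  architecture: lambda.Architecture.ARM_64,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('dist/get-user-profile'),
  memorySize: 512,                  // CPU scales proportionally with RAM
  timeout: Duration.seconds(10),    // how long one invocation may run
  reservedConcurrentExecutions: 50, // cap on parallel executions
});
```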

Theoretically (under best conditions) containers and functions can be scaled similarly quickly and sufficiently for almost all cases. Let’s not forget that you have to get into this situation first.

With functions, scaling comes much more cheaply and flexibly. Functions benefit more extensively from the cloud providers' experience with load and scaling, and we are even rewarded for this with lower prices. If we have to intervene in scaling manually, we can target exactly the function that is problematic. With containers, we often have to start analysing again at service level.

Infrastructure complexity and knowledge

No cloud service on its own is sufficient to offer our users custom software. For the sake of versatility and flexibility, we always have to combine several services in the cloud. If we want to offer a REST API to users worldwide via the internet, the situation differs somewhat between Functions and containers.

In order to serve requests to our API from all over the world via a domain, an Amazon API Gateway with edge-optimised endpoints is recommended for both variants. This allows our API to benefit from the cloud provider's Content Delivery Network (CDN), which promises our users high transmission speeds with low latency.

In the “serverless way”, we can attach our Lambda Functions directly to the API Gateway via an “AWS Proxy Integration”. Each HTTP method offered by a REST resource represents one of these integrations and is linked to the Lambda Function that is to be called for it.
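
A minimal CDK sketch (TypeScript) of this wiring, reusing the hypothetical function from the scalability section; resource names are invented:

```typescript
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Stack } from 'aws-cdk-lib';

declare const stack: Stack;
declare const getUserProfile: lambda.Function;

// Edge-optimised REST API served via the provider's CDN edge locations.
const api = new apigateway.RestApi(stack, 'UserApi', {
  endpointTypes: [apigateway.EndpointType.EDGE],
});

// One proxy integration per HTTP method on a REST resource.
const user = api.root.addResource('users').addResource('{id}');
user.addMethod('GET', new apigateway.LambdaIntegration(getUserProfile));
```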

In the “container way”, the API Gateway also needs to know one of these integrations per method on a REST resource. However, with ECS on Fargate we have to do (and understand) much more to make the containerised application accessible to the API Gateway. To do this, we need to step into the ring ourselves with some of the more complex resources from EC2: Application Load Balancer, Virtual Private Cloud (VPC), Security Groups and all the configurations and possibilities that come with them. These network-centric resources consume a lot of a development team's cognitive capacity, especially for those who are new to the cloud-native field. Mistakes here can also lead to huge security problems.
We don't just create and configure these resources once and then forget about them. We have to keep this knowledge in our team and be able to apply it again when changes are necessary (network requests from and to the application, Fargate platform updates, security).
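
To make the difference tangible, here is a rough CDK sketch (TypeScript) of the additional moving parts the container way brings with it; all names and sizes are invented for illustration, and even this high-level pattern creates networking resources we have to own:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import { Stack } from 'aws-cdk-lib';

declare const stack: Stack;

// The network layer we now own: VPC, subnets, routing, security groups.
const vpc = new ec2.Vpc(stack, 'UserServiceVpc', { maxAzs: 2 });
const cluster = new ecs.Cluster(stack, 'UserServiceCluster', { vpc });

// Behind this one construct: an ALB with listeners and target groups,
// security groups and IAM task roles, all of which remain our concern.
new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'UserService', {
  cluster,
  cpu: 512,             // 0.5 vCPU
  memoryLimitMiB: 1024, // 1 GB RAM
  desiredCount: 2,      // two tasks for acceptable availability
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('my-account/user-service'),
  },
});
```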

Conclusion: Running ECS Fargate containers in AWS is comfortable compared to container orchestration platforms that require more manual work from our team. However, there are still concrete touchpoints with networking resources that software development teams need to know about and deal with in the long run.
Functions are much easier to manage. The cloud provider takes the largest possible chunk out of the networking headache. This also simplifies the codebase in terms of overview and complexity and significantly reduces the cognitive load of reading and understanding it.

The portability myth

Many companies opt for containers in the cloud or on their own platform because of the know-how advantage and the presumably high portability of containers. The idea: if circumstances change and the infrastructure has to be relocated, the containers simply move. What is often ignored is that moving the containers is not the end of the story. Changes to storage and messaging can hardly be ruled out during a move and will most likely also affect the applications in the containers. Even though containers can run with any cloud provider or under one's own roof, many wires inside the applications often have to be re-laid for a platform change.
Functions are often seen as purely vendor-specific compute units that can only work with one cloud provider. With the right structuring of functions and their dependencies, a similar basis for a platform move can be created as with containers. FaaS offerings are available from all major cloud providers, and there are even ways to run functions on in-house platforms (e.g. with OpenFaaS or Knative).

Security

Principle of Least Privilege (PoLP)

The idea of this principle is to give users only the rights and access they need to perform their task. In terms of cloud resources, workloads such as Lambda functions and container applications also constitute users. A Lambda function or a Fargate container task always has an IAM role to which permissions are granted.

To comply with the principle, a Lambda function “user-profile”, which reads user information per ID from a DynamoDB table, only receives the IAM permission “dynamodb:GetItem” on the necessary table. If attackers succeed in remote-controlling the Lambda function for their own purposes (e.g. via the Log4Shell vulnerability or an error in the control flow of the function), their radius of action is enormously restricted. We can thus prevent worse things like the deletion or exfiltration of sensitive data.
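
In CDK (TypeScript), this strict grant is a one-liner; the function and table handles are assumed to exist already:

```typescript
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambda from 'aws-cdk-lib/aws-lambda';

declare const userProfileFn: lambda.Function; // the "user-profile" function
declare const userTable: dynamodb.Table;

// Grant exactly one action on exactly one table: the function can read
// single items, but cannot scan, write or delete anything.
userTable.grant(userProfileFn, 'dynamodb:GetItem');
```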

A container task running a service, e.g. the “User Service”, has more than one purpose. If it offers user profile management, password and multi-factor authentication, the ECS task role will need at least a handful of IAM permissions. The radius of action for attackers who can break in via a vulnerability of the HTTP API is thus already noticeably larger. All of a sudden, intruders get IAM permissions for the entire functional and technical scope of the service.

Since functions often fulfil only a small task, the Principle of Least Privilege can be applied strictly. An attacker who succeeds at first is, disappointed by the small radius of action, more likely to be effectively stopped for the time being.

Patch management

Dependency update tools such as Renovate or Dependabot, combined with good test coverage and automation, become a strong shield against security vulnerabilities in software dependencies through continuous deployment.

As soon as a vulnerability is detected and a fix is made available, the bot tests a build with the patched dependency and then rolls the change out to the test environment. If the deployed environment still withstands the system tests, the update rolls straight to production. This is patch management at its best, and in theory it works for functions just as well as for containers.

Containers, however, make it a little more difficult. With a container image, the part of the software that is our responsibility (not the cloud provider's) is not always easy to control. If there is an update to our base image that fixes the vulnerable component, good. Otherwise, we have to laboriously update or remove the affected package when building the image (e.g. in the Dockerfile). If we pin an exact version, however, it is important to clean it up again later so as not to lose sight of future updates. If tools can't help us with such occasionally tricky, technical matters, it's a good hint that we should gratefully adopt the next level of abstraction: Functions! This way, we only have to worry about a minimum of dependencies ourselves and juggle fewer tools and scripts.

Functions vs. containers – Conclusion

There are many reasons to go with Functions as a Service in the public cloud. Containers currently enjoy a lot of attention and prestige in the modern IT world and, compared to past decades, that has its justification. However, the cloud providers' roadmaps speak in favour of FaaS and managed services, backed by hard, objective facts. These facts are clearest in pricing and in the advantages in complexity and security.
Functions are the logical evolution of containers as we currently know them from Docker, Kubernetes and the like. Ultimately, functions are just containers offered at the next higher level of abstraction, opening up the next wave of potential for cloud customers. So don't be afraid to dive into the world of functions and managed services on your next cloud adventure! Have fun.
