Episode 45: Serverless with Paul Swail [EN]


Our guest today is Paul Swail, an expert on serverless architectures. We discussed the architectural and organisational aspects of serverless:

  • what serverless is
  • why serverless is the future
  • how it provides value to both users and development teams
  • which tasks it’s suited or unsuited for
  • how to get started with serverless

In this episode of "DevOps auf die Ohren und ins Hirn," Luca Ingianni interviews serverless consultant Paul Swail, delving into the world of serverless computing. They discuss the advantages of serverless, such as enhanced focus on business value and reduction in infrastructure management, while also addressing common misconceptions and challenges, including debugging, testing, and integration. The conversation covers practical advice for implementing serverless architectures, including transitioning from traditional setups and scaling serverless applications effectively. The episode offers insights into serverless best practices and emerging trends, making it a valuable resource for developers and teams considering or currently using serverless technology.

Contents

  • Introduction to Serverless and Guest Paul Swail
  • Benefits of Serverless Computing
  • Challenges in Serverless Architecture
  • Transitioning to Serverless from Traditional Architectures
  • Scaling Serverless Applications
  • Best Practices in Serverless Deployment
  • Debugging and Testing in a Serverless Environment
  • Potential Misconceptions and Pitfalls in Serverless
  • Emerging Trends in Serverless Technology
  • Serverless Resources and Learning Opportunities

Show notes

Serverless Framework: https://serverless.com

Paul’s Twitter: https://twitter.com/paulswail

Paul’s website and free 5-part email course: https://serverlessfirst.com

Transcript (automatically generated; anyone who finds errors may keep them)

Welcome to a new episode of DevOps auf die Ohren und ins Hirn, or in English, DevOps from
your ears straight to your brain.
Usually, this is a German-language podcast, but sometimes we make exceptions for international
guests.
Today is one such exception.
My name is Luca Ingianni.
I’m a DevOps consultant, trainer, and coach, trying to get teams to work together better
and create better products for their customers.
And I host this podcast together with my colleague, Dierk Söllner, who unfortunately couldn’t
make it today.
Today, I have the pleasure to introduce my friend, Paul Swail.
Paul is a solo consultant who specializes in serverless.
We’ve had a German-language serverless episode before, episode 26.
I’ll link to it in the show notes.
But while that episode was fairly technical, I wanted to talk to Paul about the higher-level
challenges, the architectural and organizational issues you might face when moving to serverless.
So, hi, Paul.
Thanks a lot for being on our show today.
Hey, Luca.
Thanks a lot for having me.
It’s great to be here.
So, tell us a little bit about yourself, Paul.
Well, as you said, I'm a serverless consultant, based in Belfast in Northern Ireland.
I've been working in software professionally for 20 years as an architect and software engineer, and I've been independent for about nine of those years.
Before specializing, I was doing general cloud work, working with Node.js on AWS, mostly container-based.
Then I came to see serverless as the way things are going in software development generally, and decided to specialize in that area about three years ago.
So, you said you saw that serverless was the way things were going.
Can you elaborate on that a little bit?
Yeah, I guess it's related to my whole developer journey.
As a developer, you always want to do things faster and with higher quality; it's productivity-driven.
A lot of serverless is about using hosted services and simply not doing the things you always thought you had to do, like installing patches on servers, which is the obvious example.
You can pretty much just focus on building apps.
I focus more on backends nowadays, but I used to do full-stack front-end and back-end web development, and being able to use services that handle the things you'd otherwise redo for every single project is a really big benefit.
You hand that off to a cloud provider, who will do part of it for you, and that allows you and the rest of the developers on your team to focus on the actual business requirements rather than what they call the undifferentiated heavy lifting.
Yeah, but let’s be honest, serverless still means somebody’s got a server somewhere, right?
So who’s taking care of that server?
Yes, it's a terrible name.
Let's get that bit out of the way at the start.
It's a can of worms when you talk about the name.
Technically, yes, there are absolutely servers there.
It all runs on Amazon's or Google's servers somewhere.
The obvious example, AWS Lambda functions-as-a-service, runs on a server.
The word serverless means that you don't need to think about the server it's running on.
It doesn't mean the server doesn't exist; it means that you as the developer or the architect designing the system don't need to care.
You can treat a function as the lowest-level building block from which you build your applications, without caring what operating system or anything like that it runs on.
Okay, I see.
Something you just said I found interesting, which is you said,
oh, you don’t really have to worry about servers and patching them and that sort of thing.
And then you sort of qualified that and said, yeah, well, not usually anyway.
What do you mean by that?
You definitely don't need to worry about patching servers; that has genuinely gone away if you're building a fully serverless application.
There are plenty of hybrid-type serverless applications where you would still need to do it partly, but that's not really what I'm getting at.
There are still lots of things that won't go away, though, like your packages.
Node.js and Python are probably the most popular languages to use with AWS Lambda, and you still need to worry about your npm packages and keeping them up to date.
So there are absolutely still things you wouldn't consider a value-add to the business problem you're working on but which you still need to do.
It's not a panacea that solves everything for you.
But one of the key things I've noticed is that the teams I work in are smaller.
Individual developers can take on more ownership of what you'd call the infrastructure parts of the system.
With more traditional server-based systems, you really needed dedicated people for those jobs, because there is a lot more to them: getting the network configuration set up, making sure all ports are locked down, all those sorts of concerns.
Having a front-end or full-stack web developer handle all of that can be very risky.
That role goes away if you have a serverless-focused development team; you don't need a dedicated network or systems engineer on the team.
You can get away with a smaller set of application-focused developers, and that bubbles the focus up to the business level, because the whole point of building the software in the first place is the business value it generates.
Engineers don't like to be seen as a cost, but the further down the stack you are, the more the business sees you as one.
The front-end developers and designers in the teams I've worked in get all the great feedback from the clients: that looks great, that's exactly the way we want it.
Technically, what they've done probably isn't that difficult, but the real value has been delivered there, to the end user.
Nobody really cares what servers or what Kubernetes stack your app is built on top of, as long as it's up and doing what it's supposed to be doing.
That's where the real value is.
Interesting.
So serverless, if I understand this right, is really just a way for the entire team to be able to focus on customer value, and all the things that are not really a value-add get dealt with by, you know, the AWSes and the Googles of the world.
Exactly, yes.
And there are other, smaller vendors layering even on top of that.
AWS has services like Lambda or DynamoDB that you can effectively treat as an API.
And then the likes of Netlify and Vercel are a layer on top of that again, adding extra developer-experience features: no config or minimal config, automatic CI/CD almost out of the box.
Lots of layers, moving further and further up the stack.
What I'm wondering is, how does that contrast with, say, microservices? Can we say those are more traditional?
I don't think they're directly comparable; you could build a serverless-only microservices architecture.
It's not either serverless or microservices.
You could absolutely design it such that every service (I hesitate at the term microservice, because it implies small, and the services aren't necessarily small) is built on a serverless-only architecture, with the services talking to each other via, say, an event bus.
It would tick all the boxes, those twelve points for microservices, whatever they call them now; you could say this is a microservices architecture, and you could totally do it in serverless.
So it's not one or the other.
But if you're thinking of the way microservices architectures have typically been built, on VMs or on containers, then serverless is definitely a higher level than that.
You don't need to worry about standing up a fleet of VMs to run all your containers on top of.
So you don’t need to worry about all of the deployment issues, really.
I mean, well, you need to deploy your application code, but that's all there is.
Yeah, you absolutely could.
Microservices have lots of pros and lots of cons.
Look at the drawbacks: distributed logging, distributed deployment.
You can absolutely get all those issues if you build your serverless app a certain way, and some things, like distributed logging, remain somewhat of an issue: different parts write to different logs, and there are tools to collate them, but you need to set those up.
But you can have a single monolithic deployment of a serverless app, even though the compute is inherently distributed; there's no getting away from that, you'll have lots of Lambda functions.
You can deploy them all in a single package: AWS provides a tool called CloudFormation, infrastructure as code for deploying a single atomic set of resources in one go.
It will either deploy or fail as a whole, so you get around the microservices problem of having to deploy all these things separately by effectively treating it as, call it, a distributed monolith.
Some people may think that sounds terrible, but it takes the good aspect of the monolithic deployment and combines it with the scalability of the distributed compute that Lambda gives you.
So you get the simplicity of thinking about a monolithic thing, but the infinite scalability of the distributed compute that Lambda gives you.
Exactly, exactly.
Yeah, that's the way I see it; if you've got bigger teams, you could design your serverless app using microservices.
But with most of the teams I've worked with, especially at the start of a greenfield project, I always begin with a monolithic deployment: everything gets deployed together from the same single repository.
You don't need to reason about partial deployments and partial rollbacks, which is where microservices can get really confusing, especially for a small team; and it goes back to having a team that needs fewer members and can be more application-developer focused.
If you start getting into really complex deployment scenarios, you're taking away the benefits of going down the serverless route in the first place.
So keep it simple, deploy monolithically, but still use Lambda, which gives you highly scalable compute.
You generally don't need to worry about load balancing until you're hitting really large scale.
Okay, wonderful.
You touched on a couple of interesting points there.
Maybe we can spend a couple of minutes looking at those in more detail.
Yes, for sure.
Typically I would ask you for best practices or something, but can I ask you for worst practices instead?
What are guaranteed ways of messing up your serverless application?
Okay, let's see.
One of the big ones: if you're coming to serverless for the first time, your tendency will be to ask, how do I get this working on my machine?
And the answer, normally, is: you don't.
You can get part of it working, and there are emulators you can use for some services, but they're either subpar or have other issues.
So I generally say: you can run the body of the Lambda code on your own machine, but for anything above that, you pretty much need your own cloud environment.
Trying to get everything running on your own machine with AWS emulators, don't even try it, because the emulators will diverge from the real services.
What works on your machine won't work in the cloud; that's just what's going to happen.
So that's one area.
Another thing: it's not an absolutely terrible thing to do, but it's becoming a best practice to keep your Lambda functions small, preferably single-purpose.
Say you're building a REST API with lots of different endpoints, GETs and POSTs for lots of different resources.
There are ways you could effectively build your whole REST API inside a single Lambda function, and I'd say that's a bad idea for a couple of reasons.
Number one, from a scalability point of view: if you've got one endpoint that gets hit a lot, the way Lambda works, you can use up your concurrency very quickly.
With separate functions, you can throttle each one at a certain level; if something is really heavily hit and you don't want it to have a knock-on effect on the rest of your application, you can throttle that specific function.
So that’s one area.
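The per-function throttling described above maps to a single setting in a Serverless Framework config. A sketch, with illustrative function and handler names (`reservedConcurrency` is the Framework's property for capping a function's concurrent executions):

```yaml
# Sketch: throttling one heavily-hit endpoint independently.
# Function and handler names are illustrative.
functions:
  listOrders:
    handler: src/orders/list.handler
    events:
      - http:
          path: /orders
          method: get
    # Cap this function so a traffic spike here cannot exhaust the
    # account-wide Lambda concurrency pool for the other functions.
    reservedConcurrency: 50
```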
But even just from a monitoring point of view, being able to monitor individual Lambda functions is very powerful.
Each function has its own logs, so if an endpoint is getting lots of errors, you can find it quickly; you know exactly which log file to look at, because every Lambda function has its own.
And from a code-reasoning point of view, you'd be doing this anyway: if you're building, say, an Express.js web app, you'd probably have each endpoint built as its own JavaScript function.
This just takes that to the next level, deploying that single JavaScript function to its own Lambda.
That also has performance benefits, because the package is a lot smaller: the Lambda spins up a lot faster when it only contains the code for one function, not the whole application.
Okay, so essentially a Lambda that is like one giant main function with no sub-functions sounds like a bad idea.
Yeah, I get that.
Let's see, what other bad practices do we see?
Not using infrastructure as code.
One of the good things about the Serverless Framework, which is a specific tool you can use for infrastructure as code, is that you're almost forced to define all your cloud resources in code.
You could go into the AWS console and point and click to create and deploy things, but you'll quickly find that's not reproducible, it takes a long time, and it's very error-prone.
Some people still try it, and they soon get caught out once someone else joins the team.
If it's a one-person team building a prototype, you could probably get away with it, but prototypes always develop into something further, and then you're saying, okay, we need to define all this in infrastructure as code, and there's a lot of reverse engineering to get what was created by hand down into code.
So I would say start with infrastructure as code from the very beginning.
There are several tools, like the Serverless Framework or AWS SAM, which do these things for you.
So use them.
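As a concrete starting point, a minimal Serverless Framework service definition looks something like this; the service name, runtime, region, and paths are illustrative assumptions. Running `serverless deploy` compiles this into a single CloudFormation stack and deploys it atomically:

```yaml
# serverless.yml -- minimal sketch of a Serverless Framework service.
# Names, runtime, and region are illustrative.
service: hello-api

provider:
  name: aws
  runtime: nodejs18.x
  region: eu-west-1

functions:
  hello:
    handler: src/hello.handler   # exports.handler in src/hello.js
    events:
      - http:
          path: /hello
          method: get
```

Everything the service needs, functions, HTTP routes, and any further resources, lives in this one file under version control, which is exactly the reproducibility the console-clicking approach lacks.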
Okay, so in a way it sounds like as long as you stick to good software engineering practice, you're going to be okay, right?
In certain respects, yes; there are some evergreen best practices which are always good no matter what architecture you use, serverless or whatever.
Keeping things simple, "you aren't going to need it," basic things like that: always stick to those.
The main paradigm shift, I guess, is the whole non-local development thing.
Local development has been pervasive for as long as I can remember, since I've been a developer anyway: you always got your own development environment working on your own machine, and that's generally not the way you do it with serverless.
I think that's the biggest difference.
But the other standard best practices generally still hold.
So we're not throwing out everything we've learned from the past.
Does that mean that everything is suited for serverless?
Could we grab any random application and say, right, now we'll implement it serverless, or are there certain things that just don't quite work?
Well, migrating existing apps is a whole issue on its own; there's lots of nuance specific to each case.
But let's say you have a new app idea and you're asking, should I build this in serverless?
What types of apps or workloads would not be a good fit?
This question is a bit of a moving target: if you'd asked me two years ago, the list would have been longer, but it's getting shorter as the cloud providers add features and new services.
Right now I would say take a serverless-first approach: whatever use cases you have, look for a serverless service that covers them, and only if that doesn't exist, use another option.
So what are the reasons you wouldn't?
One of the current limits is long-running stateful tasks, a process which needs to run over a long time, perhaps in an existing app which wasn't designed for short-running compute.
Lambda has a maximum execution time of 15 minutes at the moment, so if you have something which needs to run longer than that, it's not a good use case.
An example: I worked with a client who have an SEO crawler, an existing CLI task which crawls websites and can take four hours to follow all the links.
You couldn't really run that within a Lambda function, so that's one such case.
Another is if you need really low latency from a REST API, like single milliseconds.
There's the concept of cold starts, which were a big talking point, a big objection, when Lambda functions first came out.
They're much less of an issue now; the cloud providers have mitigated them significantly.
But the very first time a deployed Lambda function gets a request, there is a small delay, what they call a cold start, to load it into memory.
So something like financial transactions, stock-market-type apps, would be a case where a Lambda-based API wouldn't be a great fit.
But that list is getting short: there aren't many types of new greenfield applications for which I'd say don't start with a serverless-based approach.
One other thing: a couple of types of databases.
Neo4j graph databases and Elasticsearch are two that come to mind; at the moment they don't have what you would call a fully serverless cloud offering.
There are what they'd call fully managed hosted offerings, and let me explain the difference.
With those, you still need to do some upfront capacity planning for the database, and you plan for your peak load, not your average load.
Whereas with serverless, you don't really need to think about what your peak load is going to be, as long as it's within Lambda's concurrency limit, and you're not paying for peak load; you're only paying for exactly what you use.
So that's an attribute to consider when asking, is this service actually serverless?
It's not always easy to say.
Interesting. It was interesting to watch you try to come up with things that don't lend themselves to serverless; it looks like you really have to struggle to find good examples.
Yeah. Say about a year ago, another item on that list was containerized applications.
You used to get appliances as VMs, where a company would put their product in a VM, and people started doing that with containers as well, and I used to say, okay, you couldn't do that in a serverless app.
But now Lambda supports container-based deployments too.
And there's a service called AWS Fargate, which sits in the hybrid world between containers and serverless: you don't need to worry about the servers, but it runs a container on demand.
It spins it up, runs it for the duration it needs, and then it shuts down.
Let's imagine that I want to get started with serverless.
Just to make it a bit more exciting, let's say I have an existing application and I want to try something new.
How should I do that in a way that doesn't end in tears?
Okay. I'm going to put aside any discussion about whether migrating an existing app is the best use of your time.
Fine; assume it's not about migrating, but, you know, I need some new functionality for this existing app, and instead of creating a microservice or something, I figure I might as well try out this fancy new serverless thing.
That's good, because I did this exact thing.
I have a separate SaaS product business I've been running for about nine years, still going; it's quite small, and it was built on containers.
Three or four years ago, as I was getting into serverless, I realized that a lot of the way I'd architected this application would be a better fit for serverless, and most of my monthly bill would be lower as well.
The ongoing maintenance I was doing on the servers would go away.
The app used an Elastic Load Balancer, and what I did was take the new REST API endpoints I was adding and, rather than adding them to my existing container-based app, route them to Lambda functions.
You can intercept requests with what they call API Gateway, an API service provided by AWS, and route requests matching a certain path, the new endpoints I was building, to the new serverless app instead of the old one, and then just build out the new features, the new endpoints, within Lambda.
I never completed the project, but parts of it went over, and it's a way of gradually extracting things.
You're not going to do a big-bang crossover; you can't, because of all the risk that would involve.
You take it piece by piece, endpoint by endpoint within your REST API, say, and migrate them one at a time over to the new system.
If you're getting into databases and keeping data in sync, it gets a bit more complicated, but if it's purely the compute part of your code, there are definitely techniques for doing that.
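Since an Elastic Load Balancer was mentioned as the interception point, the endpoint-by-endpoint routing can be sketched as a CloudFormation listener rule; resource names, priority, and paths here are illustrative assumptions (an ALB can also forward directly to a Lambda target group):

```yaml
# Sketch: ALB listener rule (CloudFormation) sending only the newly
# built endpoints to the serverless side, while all other traffic
# keeps hitting the legacy container app. Names are illustrative.
NewEndpointsRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref PublicListener
    Priority: 10
    Conditions:
      - Field: path-pattern
        Values:
          - /api/reports/*        # only the new endpoints
    Actions:
      - Type: forward
        TargetGroupArn: !Ref LambdaTargetGroup   # TargetType: lambda
```

Each newly migrated endpoint gets its own path rule, so the cutover stays reversible one route at a time.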
But if you were starting out and didn't know anything about serverless at all, and you want to deploy your first serverless app, say a REST API (I keep using that example, but it's a well-known thing), I usually recommend the Serverless Framework at serverless.com.
It has lots of good examples of quickly defining some endpoints, writing a simple JavaScript Node.js Lambda function, deploying it to the cloud, and getting a response back quickly.
Okay, so in essence, what you were describing is just the classic strangler pattern.
Exactly, that was exactly the strangler pattern.
I knew that pattern and the name just wasn't coming to me, so I didn't stumble over it.
Yes, that's exactly what it is.
Okay, wonderful. And that works well.
I suppose that's really obvious if you think about it: with serverless, you can write a single Lambda function and throw it at the cloud someplace, and that's it, isn't it?
There's nothing else to do, really.
Yeah. Another good use case for getting started, a real low-hanging fruit: if you're a developer in an organization that's maybe quite skeptical about serverless, but you can see the benefit of it, look at all the cron jobs you have.
Cron jobs are good low-hanging fruit.
Lambda allows you to run custom compute on a cron schedule, say a nightly job to send out email reports or something like that.
Configuring a server just for that would be a real pain, and you often end up doubling up on a server: something that's serving the front-end API is also running back-end cron jobs, and if it's a heavyweight processing job in the background, it could bring down the whole server.
Whereas if you implement it as a Lambda function, you just run the code in the Lambda function on a simple configured schedule and deploy it.
It just works for you in the cloud; you don't need to worry about it bringing down a server, or about setting up a server for it in the first place.
Oh, that's interesting.
That's such helpful advice, I think, because as you say it's such a nice low-hanging fruit: it's fairly low risk.
You're porting a cron job, or implementing a new one, as a Lambda function, and if it doesn't quite work, then maybe you don't send those emails for a night or something.
Yeah. The good thing is that cron jobs are generally not user-facing, so at worst the user impact is minimal.
So they're a good place to start.
Okay, wonderful. So now I'm all excited about this newfangled serverless thing.
How can I safely grow the serverless portion of my application?
It's all well and good if it's five Lambdas and I can keep them in my head, no problem; but how do I sustainably build a serverless application?
What do I need to do to keep it maintainable, testable, free from too much technical debt, that sort of thing?
Yeah, yeah.
There are a few things there, and you alluded to one with the number of Lambda functions.
There's a criticism people who haven't used serverless often raise: okay, but now I'm just going to end up with a load of Lambda functions to manage.
But from a pure code point of view, you have the exact same amount of code as you would in a monolithic deployment.
They're just functions, whether they're functions on your file system or Lambda functions in the cloud.
There is a little bit more configuration: you may configure a timeout for a Lambda function, or the path to its source, so there are a couple of extra lines of YAML, but it's more or less the same amount of code you would have normally.
Your considerations around growing the application are then the same as for any growing code base: modularize it, split it up, keep a sensible folder structure.
If it comes to it, consider microservices, if the system is being worked on by multiple teams and you can come up with a contract for the services to talk to each other; at that stage you can start splitting your serverless system into microservices.
But that's only if it's getting quite big.
Then there's monitoring, once you're in production.
AWS has tools which are okay, I would say, for basic operational stuff.
But if you have more advanced operational concerns, if you want really nice dashboards with all the information in one place, there are third-party providers which aggregate the data that AWS CloudWatch collects, or plug their own agents into your Lambda code and send the data on, and you get all the nice dashboards and alarms.
That's something you'll need to look into as your application grows in production; but AWS CloudWatch, their out-of-the-box monitoring and logging service, does a good enough job, I would say, to get you started before you need to look at third parties.
And what else?
Yeah, testability, you mentioned.
Yes, so that again.
In the middle of last year, I ran a survey through my newsletter and
asked people on LinkedIn to participate as well.
I think we got over 150 to 200 responses from people who have been
building serverless apps or have got started, and the questions really
were around what the main pain points were. And the top two:
number one was observability and monitoring,
and number two was testing.
So testing, I think, is an area where, again, there's a slight paradigm shift
coming from non-serverless based development.
Like I mentioned, there's the whole local versus remote difference,
and that is an issue here compared to the standard approach.
If you think back to Martin Fowler, or whoever came up
with the original testing pyramid:
a pyramid, a triangle shape, with unit tests at the bottom,
integration tests in the middle, and end-to-end or system tests,
whatever you call them, at the top, and there are fewer of them as you go up.
So the idea was that the vast majority
of the automated tests in your system would be unit tests.
And then you would have some integration tests and even fewer end-to-end tests.
That paradigm,
in my experience and from talking to other serverless experts,
doesn't quite hold for
serverless, cloud-services based applications.
Because the risk now is less code based.
Unit tests are really focused on the code, the actual procedural code,
whether it's Node.js or Java or Python, whatever you're writing your code in;
that is seen as the biggest risk,
the riskiest area that needs the most testing.
Whereas in serverless apps, that isn't the case as much.
You still do write unit tests,
but the integration points between the services become the real
places where the ball can be dropped, where configuration is wrong.
A big thing in AWS land is IAM.
So it's
the access control.
Like identity and access management.
Yes, that’s it.
Serverless, Lambda functions, allow you to define really fine-grained permissions on each
function, which is best practice, and you do that by default.
But very easily, if you tried to run something on your own
machine, it would just work, because IAM isn't in play on your own machine.
Whereas you deploy it into the cloud and it would start,
it would throw an error because it doesn't have sufficient IAM permissions
to talk to DynamoDB, let's say.
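As a sketch of that kind of fine-grained, per-function permission, a `serverless.yml` might grant one Lambda access to exactly one DynamoDB action on one table. The function and table names here are hypothetical, and per-function `iamRoleStatements` assume the `serverless-iam-roles-per-function` plugin:

```yaml
functions:
  createOrder:
    handler: src/handlers/createOrder.handler
    iamRoleStatements:               # per-function role (serverless-iam-roles-per-function plugin)
      - Effect: Allow
        Action:
          - dynamodb:PutItem         # only the one action this function needs
        Resource: !GetAtt OrdersTable.Arn   # only this one table, defined elsewhere in the stack
```

If the handler code tries anything beyond `PutItem` on that table, the deployed function fails with an IAM error, which is exactly what integration tests against the real cloud catch.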
And so that's where writing integration tests, or
even end-to-end tests that hit your actual REST API endpoint or your GraphQL
API endpoint, bottoms out those issues, to make sure
that all the configuration that you've written is correct.
So writing integration and end-to-end
tests is much more important when you're building serverless apps.
Yeah, that’s funny.
It just occurred to me.
If I’m writing a normal Java app,
I don’t need to prove that my function can call another function.
It’s just a given.
And all of a sudden, that’s not quite the case anymore, is it?
Yeah.
In a monolith, you can just assume that, because everything is very tightly coupled:
a Java or JavaScript function calls another one, and you assume it works.
But you can't assume that with these services in the cloud.
You're responsible for tying them together.
Your developers are effectively tying the knot between them, making sure they're talking
the right language and that they have permissions to talk to each other.
That's all done via infrastructure as code.
So you still need to write tests to verify that, because there are a lot
more integration points between services and cloud resources.
Interesting.
But another thing that worried me,
as somebody who has never tried serverless before, is how would I be able
to effectively debug something? If I throw an input at some function and it maybe
goes through a bunch of different Lambdas until it finally reaches its
destination, like in a proper integration test,
how can I easily debug where it breaks?
Yeah. So the scenario you talked about there, multiple Lambdas chained together.
There's choreography and there's orchestration, and
without getting too deep into the patterns, orchestration is when you have,
you know, a sequence of actions, integration points between different
services, that you want to run in a certain sequence, or where you may want
some extra logic.
For that, AWS has a service called AWS Step Functions, which is
like a state machine.
You define it in JSON or YAML as different states.
Each state can have a task,
which would be implemented as a Lambda function.
And it has a nice visual in the console, in the browser:
green or red.
So if something fails, it will highlight it in red.
You just click through to the red state and it will have the logs for that right beside it.
So that
means for an orchestrated workflow, it's
very good at helping you debug what went wrong.
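A minimal state machine of the kind described here, written in Amazon States Language (JSON), might chain two Lambda tasks. The state names, account ID and function ARNs below are placeholders:

```json
{
  "Comment": "Illustrative two-step order workflow (names are hypothetical)",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validateOrder",
      "Next": "ChargePayment"
    },
    "ChargePayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:chargePayment",
      "End": true
    }
  }
}
```

In the console, each of these states shows up as a node in the visual workflow, colored green or red per execution.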
And then there's what we call
choreography-style integrations.
Say you have an order management service, the
canonical example, where you have a website like Amazon.com where you place an order
and it just publishes an event.
And then you have lots of other
microservices which consume that: they
pull or read the message off the bus and do their own thing, like delivery,
shipping or pricing. By their whole distributed nature, those are
generally harder to debug. So if something goes wrong, you need to check
all those points.
Did the message get published in the first place?
OK.
Then narrowing it down
from there: is it a downstream service which failed?
Then you end up looking at the logs for that specific service.
So, yeah, choreographed workflows are generally
the hardest to debug once they're live.
Okay. So just like stepping through
a piece of your code or something, that's just not possible?
No, no, no.
But there is AWS CloudWatch Logs,
which is where Lambda and other AWS services write their logs to.
Say you have a transaction
ID which has the same value all the way through;
it passes through all the services, like the order service, your shipping service.
You can filter on it, say it's a UUID,
you can do a search within CloudWatch.
It's quite slow and limited, but if you were using a richer
logging aggregation service,
that would probably be an easier way of doing it.
But if you have what they call
a unique transaction ID or correlation ID, which correlates all
the different messages for that single transaction across the different services,
that really helps. Okay.
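To sketch the correlation-ID idea in a few lines, assume each service emits structured JSON log lines that carry a shared `correlationId` field (the log format and field names here are assumptions for illustration, not a CloudWatch API):

```python
import json

# Example log lines as different services might emit them
# (structured JSON logging with a shared correlationId is an assumption).
raw_logs = [
    '{"service": "order", "correlationId": "abc-123", "msg": "order placed"}',
    '{"service": "shipping", "correlationId": "xyz-999", "msg": "label printed"}',
    '{"service": "shipping", "correlationId": "abc-123", "msg": "shipment created"}',
]

def filter_by_correlation_id(lines, correlation_id):
    """Return the parsed log entries belonging to one transaction."""
    entries = (json.loads(line) for line in lines)
    return [e for e in entries if e.get("correlationId") == correlation_id]

# Reconstruct the trace of one transaction across services
trace = filter_by_correlation_id(raw_logs, "abc-123")
for entry in trace:
    print(entry["service"], "-", entry["msg"])
```

This is exactly the kind of query a log aggregation service runs for you across all services at once; doing it by hand in CloudWatch works, just more slowly.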
So I think now I feel quite confident that I can build my first
serverless application.
Is there something we've missed?
You know, given that this episode is called Practical Serverless.
I think you've been pretty thorough.
And yeah, I guess I've probably been more back-end focused,
so I'm talking a lot about Lambda functions.
That's more about my recent experience in the last couple of years.
But going back to the whole mindset
of serverless, and this thing of bringing development
up the stack and closer to the end user:
if you're thinking about empowering,
I sort of like to think, eventually not just front-end developers,
but people who build the front end, to build more,
there are more services coming for them.
Even Lambda, some of its features, will be too low level for
some. The likes of Netlify and Vercel abstract
a lot away: you can run server-side code in a serverless function,
which under the hood uses Lambda.
I have a client who is semi-technical, who's just starting.
I wouldn't necessarily recommend them jumping straight into writing raw Lambda,
whereas the likes of Vercel or Netlify are a really nice on-ramp for someone who is
building a web app and needs a little bit of
server-side functionality, to run some code on the server, say, when a form is submitted.
And that's a really nice place to start.
Interesting.
I feel like this is an excellent place to wrap up, but I feel like I really
should mention the great workshops that Paul does.
I took one of his serverless testing workshops.
When was it? Last week?
Yeah, it was November.
We did that. Yeah, exactly.
I think it was really, really great.
So
just from personal experience,
if you need serverless stuff, go talk to Paul, not only because he
knows what he's doing, but also because he's really great at teaching it.
If people wanted to talk to you, Paul, where could they find you?
Yeah, sure.
So my website is serverlessfirst.com.
If you go there, on the home page I have a link to
a five-part email course, for people who are interested in
getting up to speed on serverless, or moving their team over to it
potentially, or finding the low-hanging fruit that we mentioned in the episode.
I cover those things in that email course.
So that's serverlessfirst.com.
And you can also get me on Twitter at @paulswail, S-W-A-I-L.
And I do some consulting, and my workshops that Luca mentioned
are all on that website as well.
When’s the next workshop going to be, by the way?
At the minute, I think we're talking
late spring, possibly May this year.
So, yeah, I'm planning it next week.
And I'm hoping, it's going to be a workshop,
but I think I'm also going to have a course option,
so you'll be able to take it totally in your own time.
It's self-paced anyway, with meetups, but there will also be a fully self-paced
option at a lower price,
if anybody's interested.
Oh, nice. OK.
Paul, thank you so much for being here.
This was a really interesting episode.
Yep, it was great. Thanks very much, Luca, for having me.
Great fun chatting.
Thanks a lot. See you next time, Paul.
Cheers, man. Bye bye.
Bye bye.