Kubernetes Response Engine, Part 5: Falcosidekick + Argo
This blog post is part of a series of articles about how to create a Kubernetes response engine with Falco, Falcosidekick and a FaaS. See the other posts:
- Kubernetes Response Engine, Part 1 : Falcosidekick + Kubeless
- Kubernetes Response Engine, Part 2 : Falcosidekick + OpenFaas
- Kubernetes Response Engine, Part 3 : Falcosidekick + Knative
- Kubernetes Response Engine, Part 4 : Falcosidekick + Tekton
- Kubernetes Response Engine, Part 6 : Falcosidekick + Cloud Run
- Kubernetes Response Engine, Part 7: Falcosidekick + Cloud Functions
The Open Source ecosystem is very vibrant, and there are many ways to create a Kubernetes Response Engine based on our dynamic duo, Falco + Falcosidekick.
Today, we will use two components of the CNCF project Argo:
- Argo Events will receive events from Falcosidekick and push them into its event bus.
- Argo Workflow will listen to the event bus and trigger a workflow when certain criteria are met.
Like we did for previous examples with Kubeless, OpenFaas and Knative, we'll address the situation where a shell is spawned in a pod, and we want to remediate that by deleting it.
This is how we will set this up:
Requirements
We require a kubernetes cluster running at least the 1.17 release, with helm and kubectl installed in your local environment.
Installation of Argo Events
We simply follow the official documentation.
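A minimal sketch of those steps, assuming the stable manifests and the native NATS event bus from the quick start (double-check the documentation for the current URLs):

```bash
# namespace, core components and an event bus, as in the quick start
kubectl create namespace argo-events
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install.yaml
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/eventbus/native.yaml
```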
Installation of Argo Workflow
Again, the official documentation will help us.
The kubectl patch below allows the workflows to run in minikube, kind, etc. See the docs about Workflow Executors to learn more.
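Here is a sketch of those steps; the `<version>` placeholder and the choice of the k8s-api executor are assumptions, so follow whatever the current documentation recommends for your release:

```bash
kubectl create namespace argo
# install Argo Workflow (pick the release that matches your cluster)
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/<version>/install.yaml

# run workflow steps through the Kubernetes API instead of the docker executor,
# so they work on minikube, kind, etc. (only needed if your version still supports
# selecting an executor)
kubectl patch configmap/workflow-controller-configmap \
  -n argo --type merge \
  -p '{"data":{"containerRuntimeExecutor":"k8s-api"}}'
```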
After a while, you should have access to the Argo Workflow UI through a port-forward:
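For example, assuming the default argo-server Deployment name from the install manifests:

```bash
kubectl -n argo port-forward deployment/argo-server 2746:2746
```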
The link is https://localhost:2746 (you can ignore the certificate error, we're in a lab 😉).
Creation of the Event Source
We'll use an Event Source of the Webhook type. It will receive Falco events from Falcosidekick and then push them into the Event Bus.
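A sketch of such an EventSource, deployed in the argo-events namespace; the webhook-falco name is referenced later by the Sensor, and the falco event name, port and endpoint match the description that follows:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook-falco
  namespace: argo-events
spec:
  service:
    ports:
      - port: 12000
        targetPort: 12000
  webhook:
    falco:                  # event name, referenced by the Sensor as "eventName"
      port: "12000"
      endpoint: /falco
      method: POST
```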
This component is pretty easy to understand: Falcosidekick will have to POST the events to the /falco endpoint of a service exposed on port 12000. Easy.
As expected, we now have a new service that will listen for events from Falcosidekick on port 12000 at the /falco endpoint:
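You can verify it with something like this (Argo Events typically names the generated Service `<eventsource-name>-eventsource-svc`):

```bash
# the generated Service should expose port 12000
kubectl -n argo-events get svc
```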
Creation of the Sensor
In the Argo Events architecture, Sensors are responsible for listening to the Event Bus and triggering something should the criteria we set match.
In our case, our Sensor:
- listens only to events pushed by the webhook-falco Event Source
- considers only events where the JSON body contains the value Terminal shell in container for the key rule; in short, we only want to match this specific Falco rule
- triggers a workflow based on a template, with our event as input
First, create the Service Account and the RBAC rules that allow our Sensor to submit workflows.
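A minimal sketch of that RBAC; the operate-workflow-sa name is purely illustrative, and the Role only grants what the Sensor needs to submit Workflows:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: operate-workflow-sa
  namespace: argo-events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operate-workflow-role
  namespace: argo-events
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows", "workflowtemplates"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operate-workflow-rolebinding
  namespace: argo-events
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: operate-workflow-role
subjects:
  - kind: ServiceAccount
    name: operate-workflow-sa
    namespace: argo-events
```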
And now we deploy our Sensor.
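Here is a sketch of such a Sensor. The dependency on the webhook-falco Event Source and the data filter on the rule key come straight from the list above; the trigger, the falco-pod-delete template name and the alert parameter are assumptions used to show the wiring (adapt them to your actual template):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook-falco
  namespace: argo-events
spec:
  template:
    serviceAccountName: operate-workflow-sa      # the Service Account created above
  dependencies:
    - name: falco-event
      eventSourceName: webhook-falco             # the EventSource we deployed earlier
      eventName: falco
      filters:
        data:
          - path: body.rule
            type: string
            value:
              - "Terminal shell in container"    # only this Falco rule triggers the workflow
  triggers:
    - template:
        name: falco-pod-delete-trigger
        argoWorkflow:
          operation: submit
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: falco-pod-delete-
                namespace: argo-events
              spec:
                workflowTemplateRef:
                  name: falco-pod-delete         # hypothetical template name, created in the next section
                arguments:
                  parameters:
                    - name: alert                # receives the raw Falco event
                      value: ""
          parameters:
            - src:
                dependencyName: falco-event
                dataKey: body                    # pass the whole Falco JSON payload
              dest: spec.arguments.parameters.0.value
```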
Creation of the Workflow Template
There is one piece missing in our Argo stack: we mentioned a template above, so we logically need to create it too, along with the service account it needs.
Argo Workflow runs all workflow steps inside their own pods. For this tutorial we'll use a Golang image developed by @developer-guy (who wrote Part 2 of this series 😄); the sources are there.
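As a sketch, the template and its Service Account could look like this; the falco-pod-delete name matches the Sensor above, the image is a placeholder for one built from the linked sources, and the alert parameter carries the raw Falco event:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: falco-pod-delete-sa
  namespace: argo-events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: falco-pod-delete
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: falco-pod-delete
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: falco-pod-delete
subjects:
  - kind: ServiceAccount
    name: falco-pod-delete-sa
    namespace: argo-events
---
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: falco-pod-delete
  namespace: argo-events
spec:
  entrypoint: delete-pod
  serviceAccountName: falco-pod-delete-sa
  arguments:
    parameters:
      - name: alert
  templates:
    - name: delete-pod
      inputs:
        parameters:
          - name: alert
      container:
        # placeholder image: build it from the sources linked above
        image: registry.example.com/falco-pod-delete:latest
        # the Go program parses the Falco event and deletes the offending pod
        args: ["{{inputs.parameters.alert}}"]
```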
At this stage, everything is ready to receive events from Falco and protect our cluster.
If you go to the Argo Workflow UI, you will find the architecture we described at the beginning.
Installation of Falco and Falcosidekick
Last but not least, it's time to install our beloved Falco and Falcosidekick and connect them to our shiny new Response Engine.
As with the other posts in this series, we'll use Helm as a convenient installation method.
Remember the service we "mentioned" earlier? We'll use it, in its FQDN format, as the webhook endpoint.
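A sketch of that installation, assuming the generated Service is named webhook-falco-eventsource-svc in the argo-events namespace:

```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

kubectl create namespace falco
# install Falco and Falcosidekick, pointing the webhook output at the EventSource service
helm install falco falcosecurity/falco \
  --namespace falco \
  --set falcosidekick.enabled=true \
  --set falcosidekick.config.webhook.address=http://webhook-falco-eventsource-svc.argo-events.svc.cluster.local:12000/falco
```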
Test our Response Engine
Let's delete the pwned pod!
We'll simulate a webshell by executing a shell command in a running pod. Run a shell command inside:
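For instance, with a throwaway pod (the pod name and image are just for the demo):

```bash
# start a victim pod
kubectl run alpine --image=alpine --restart=Never -- sh -c "sleep 600"

# spawn an interactive shell inside it: this matches the
# "Terminal shell in container" Falco rule
kubectl exec -it alpine -- sh -c "uptime"
```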
If you're quick enough, you may see the termination of the pod.
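For example, by watching the pods:

```bash
kubectl get pods --watch
```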
And in the Argo Workflow UI.
👍
Go a little further with Argo
We can go even further by deploying all of the components with Argo CD, another project from the Argo team.
You can find everything you need in this repo.
Here's a quick demo of the results, with the exact same workflow we just created in this tutorial.
Conclusion
We now have yet another way to create a Response Engine with amazing pieces of software from the Open Source world. Of course, this is just the beginning; feel free to share your functions and workflows with the community to start building a true library of remediation methods.
If you would like to find out more about Falco:
- Get started in Falco.org.
- Check out the Falco project in GitHub.
- Get involved in the Falco community.
- Meet the maintainers on the Falco Slack.
- Follow @falco_org on Twitter.