CloudWatch on AWS: How to tackle high-security requirements

23.8.2022 | 15 min read

If you build cloud-native applications, you will also generate log output. Logs are essential for tracing an application's behavior and for localizing errors quickly in the event of a crash. However, log output of any kind also allows potential intruders to draw conclusions about the structure and security measures of an application. AWS provides several security concepts that help keep information stored in AWS CloudWatch safe.

If more than one application uses AWS CloudWatch and you want to collect all logs in a central place, one common practice is to gather them in a single S3 bucket and set a retention period on the LogGroups. After the retention period expires, the LogGroups are emptied and S3 becomes the source for forensic research in the logs. Amazon Athena provides a SQL-like query language to find specific entries in the logs stored in S3. S3 buckets can be encrypted as well, so they offer a secure and cheap place to keep the logs. There are many other convenient ways to visualize your logs inside and outside AWS.
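
Once the logs are in S3, a search via Athena could look like the following sketch with the AWS SDK for JavaScript. The table cloudwatch_logs, the database logs_db, and the query-results bucket are illustrative names; a matching table definition over the export prefix must already exist in Athena:


import * as AWS from 'aws-sdk';

const athena = new AWS.Athena({ apiVersion: '2017-05-18' });

// start a search for error entries in the exported logs (Athena queries run asynchronously)
const findErrors = async () => {
    const response = await athena
        .startQueryExecution({
            // table and database names are examples
            QueryString: "SELECT * FROM cloudwatch_logs WHERE message LIKE '%ERROR%' LIMIT 100",
            QueryExecutionContext: { Database: 'logs_db' },
            // bucket for query results is an example
            ResultConfiguration: { OutputLocation: 's3://my-athena-query-results/' }
        })
        .promise();
    console.log('query started:', response.QueryExecutionId);
};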

This blog post shows you both solutions, written in TypeScript: encryption with AWS KMS and exporting CloudWatch LogGroups to S3.

You can use the AWS Key Management Service (KMS) to encrypt LogGroups in CloudWatch. KMS offers cryptographic services that let you create your own customer-managed keys, for example to encrypt log groups. KMS offers a high level of security because the master key used for encryption never leaves the service. Every time the key is rotated, you get a new master key. The use of each key can be regulated precisely by policy, which means that specific key operations can be allowed or denied per department via role assignment.
It makes sense to use an alias for KMS keys to make them more manageable for humans. Aliases are also essential when keys are to be rotated: an alias is much easier to handle than the cryptic ID of a key. Most cloud-native applications rely mainly on Lambda functions, and each Lambda function automatically creates a LogGroup in AWS named after the function.
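
As a small illustration of how an alias spares you the cryptic key ID, the key behind an alias can be looked up via the SDK. A minimal sketch (only the first page of aliases is fetched for brevity; the alias name anticipates the example further below):


import * as AWS from 'aws-sdk';

const kms = new AWS.KMS({ apiVersion: '2014-11-01' });

// look up the key a given alias currently points to
export const resolveAlias = async (aliasName: string): Promise<string | undefined> => {
    const response = await kms.listAliases().promise();
    return response.Aliases?.find(alias => alias.AliasName === aliasName)?.TargetKeyId;
};

// usage: resolveAlias('alias/UserCreateLogGroup').then(keyId => console.log('key id:', keyId));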

Serverless Framework

If you are using TypeScript, JavaScript, and Node.js in your project, the Serverless Framework is a good choice to simplify complex tasks when building Lambda-based applications on AWS. The framework lets you define AWS Lambda functions and their triggers in a simple YAML syntax. You can easily extend it with plugins, including self-written ones, and solutions for many specific problems are available in the npm registry. "Infrastructure as YAML" makes development easy and secure.

Serverless transforms the YAML structure into CloudFormation templates, so it can be extended with your own CloudFormation snippets if the framework or a plugin does not do exactly what is needed. Serverless also has many plugins for offline development, by means of which a DynamoDB can be simulated locally, for example.

However, there is no suitable out-of-the-box solution for encrypting CloudWatch logs while rotating keys more frequently than once a year. This is where the extension with CloudFormation templates comes in handy.

Serverless YAML

If you use Serverless, the framework has to be listed in the dependencies of your package.json, and a serverless.yaml file must reside in the root directory of your project:


service: users

functions: # Your "Functions"
  usersCreate:
    events: # The "Events" that trigger this function
      - httpApi: 'POST /users/create'
  usersDelete:
    events:
      - httpApi: 'DELETE /users/delete'

resources: # The "Resources" your "Functions" use. Raw AWS CloudFormation goes in here.

The Lambdas behind these two endpoints each create a LogGroup and write all console output to it. Our goal now is to encrypt these logs.

This simple example from the Serverless documentation shows roughly how the YAML is structured and where the extension for encryption has to go. The templates can be written directly into serverless.yaml or kept in separate files, which I would recommend for better structuring. Below “resources:” there is:


resources:
   - ${file(myconfig/cloudwatch/usersCreate.yml)}
   - ${file(myconfig/cloudwatch/usersDelete.yml)}

Here is the raw structure inside the template:

  • Generate a KMS key with an appropriate policy
  • Create an alias for the KMS key
  • Associate the AWS-generated LogGroup with the alias
  • Set retention and tags for follow-up actions

This is what the usersCreate.yml that encrypts the corresponding LogGroup looks like:


Resources:
  KMSKeyUserCreate:
    Type: AWS::KMS::Key
    Properties:
      Description: 'key for cloudwatch encryption of lambda assets'
      Enabled: true
      Tags:
        - Key: rotate
          Value: 'TRUE'
      KeyPolicy:
        Id: key-for-loggroup
        Version: '2012-10-17'
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: kms:*
            Resource: '*'
          - Sid: Enable CWL Permissions
            Effect: Allow
            Principal:
              Service: !Sub logs.${AWS::Region}.amazonaws.com
            Action:
              - kms:Encrypt*
              - kms:Decrypt*
              - kms:ReEncrypt*
              - kms:GenerateDataKey*
              - kms:Describe*
              - kms:UpdateAlias
            Resource: '*'
            Condition:
              ArnEquals:
                kms:EncryptionContext:aws:logs:arn: !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/--userCreate'

  KMSKeyUserCreateAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: alias/UserCreateLogGroup
      TargetKeyId: !Ref KMSKeyUserCreate

  UserCreateLogGroup: # the logical ID automatically generated for the lambda
    Type: AWS::Logs::LogGroup
    DependsOn: KMSKeyUserCreateAlias
    Properties:
      RetentionInDays: 14 # or however long the retention period should be
      KmsKeyId: !Sub 'arn:aws:kms:${AWS::Region}:${AWS::AccountId}:alias/UserCreateLogGroup'
      Tags:
        - Key: exportToS3
          Value: 'TRUE'

With this setup, access to the encrypted logs is only possible for the root account. In practice, you would instead enter the role of the department that is responsible for these logs. The key is tagged ‘rotate’ so we can easily find it later, and the LogGroup is tagged ‘exportToS3’ for the same reason.
Serverless builds logical IDs using a naming pattern, so it is easy to figure out that “UserCreateLogGroup” is the logical ID identifying the Lambda's LogGroup, which makes it easy to supply RetentionInDays and the KmsKeyId. You can also use the alias instead of the key ID.

Note: AWS may not wait long enough for the alias to be created and might report that the key is inappropriate for the LogGroup. In this case, the KmsKeyId of the LogGroup must reference the logical ID of the key (KMSKeyUserCreate) for the first deployment; the alias can then be used from the next deployment on. For the key rotation it is important that the LogGroup depends on the alias and not on the key, because the key itself is discarded later during rotation.
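
To verify after deployment that the association succeeded, a quick check of the kmsKeyId reported for the LogGroup helps. A small sketch, with the prefix as an example:


import * as AWS from 'aws-sdk';

const logs = new AWS.CloudWatchLogs({ apiVersion: '2014-03-28' });

// print the KMS key associated with each log group under the given prefix
const checkEncryption = async (logGroupNamePrefix: string) => {
    const response = await logs.describeLogGroups({ logGroupNamePrefix }).promise();
    response.logGroups?.forEach(logGroup =>
        console.log(logGroup.logGroupName, '->', logGroup.kmsKeyId || 'not encrypted')
    );
};

// usage: checkEncryption('/aws/lambda/');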

The code shown here is a template for all CloudWatch encryptions, so I won’t repeat it for every Lambda.

KMS provides a key rotation scheme of its own. However, it only supports a fixed rotation period of 365 days. If other periods are required, you need to implement the rotation yourself using a Lambda function.
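
For comparison: if the fixed 365-day period is acceptable, the built-in rotation can simply be switched on per key, and no custom Lambda is needed:


import * as AWS from 'aws-sdk';

const kms = new AWS.KMS({ apiVersion: '2014-11-01' });

// switch on the built-in annual rotation for a single key (fixed 365-day period)
const enableAnnualRotation = async (keyId: string) => {
    await kms.enableKeyRotation({ KeyId: keyId }).promise();
};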

The Lambda for the rotation looks like this (keyrotation.ts; the aws-sdk package has to be added to package.json):


import * as AWS from 'aws-sdk';
import { KMS } from 'aws-sdk';
import { CreateKeyRequest, KeyList, KeyListEntry, TagList, UntagResourceRequest } from 'aws-sdk/clients/kms';

const KeyClient = new AWS.KMS({ apiVersion: '2014-11-01' });
const KEYS_ROTATION_MARK = 'rotate';
const KEYS_MAX_LIFETIME_DAYS = '90';
const TAGKEY_MAX_LIFETIME_DAYS = 'maxLifetimeDays';
const TAGKEY_ROTATED_AT = 'rotatedAt';

/**
 * get a tag's value for a given key or undefined
 * @param tags
 * @param key
 */
export const getTag = (tags: TagList | undefined, key: string): string | undefined => {
    return tags?.find(tag => tag.TagKey === key)?.TagValue;
};

/**
 * rotate a single key: create a new key with the old key's policy, point all
 * aliases to the new key, and remove the rotation tag from the old key
 * @param key
 * @param keyMetadata
 * @param kmsClient
 */
export const rotateKey = async (key: KeyListEntry, keyMetadata: KMS.KeyMetadata, kmsClient: AWS.KMS) => {
    // collect the aliases that point to the key being rotated
    try {
        const aliases = await kmsClient
            .listAliases({
                KeyId: key.KeyId || ''
            })
            .promise();
        if (!aliases || !aliases.Aliases || aliases.Aliases.length === 0) {
            // all our keys have aliases - this one cannot be handled
            console.log('key had no aliases');
            return;
        }
        // get policy of old key to copy to new key
        const policyResponse = await kmsClient
            .getKeyPolicy({
                KeyId: key.KeyId || '',
                PolicyName: 'default'
            })
            .promise();

        const newTags: TagList = [
            { TagKey: TAGKEY_MAX_LIFETIME_DAYS, TagValue: KEYS_MAX_LIFETIME_DAYS },
            { TagKey: TAGKEY_ROTATED_AT, TagValue: new Date().toDateString() },
            { TagKey: KEYS_ROTATION_MARK, TagValue: 'true' }
        ];
        const keyParam: CreateKeyRequest = {
            Description: keyMetadata.Description,
            Policy: policyResponse.Policy,
            Tags: newTags
        };
        // create new key
        const newKey = await kmsClient.createKey(keyParam).promise();
        if (!newKey || !newKey.KeyMetadata) {
            console.log('create a new key failed');
            return;
        }
        // update alias with the new key - could be more than one
        for (const alias of aliases.Aliases) {
            await kmsClient
                .updateAlias({
                    TargetKeyId: newKey.KeyMetadata.KeyId || '',
                    AliasName: alias.AliasName || ''
                })
                .promise();
        }
        // remove the rotation mark from the old key
        const untagResourceParam: UntagResourceRequest = {
            KeyId: key.KeyId || '',
            TagKeys: [KEYS_ROTATION_MARK]
        };
        await kmsClient.untagResource(untagResourceParam).promise();
    } catch (e) {
        console.log('error:', (e as Error).message);
    }
};

/**
 * processes a single key and performs rotation if necessary
 * @param key
 * @param keyMetadata
 * @param tags
 * @param kmsClient
 */
export const processKey = async (
    key: KMS.KeyListEntry,
    keyMetadata: KMS.KeyMetadata,
    tags: TagList,
    kmsClient: AWS.KMS
) => {
    const rotatedAt = getTag(tags, TAGKEY_ROTATED_AT);
    const maxLifetimeDays = getTag(tags, TAGKEY_MAX_LIFETIME_DAYS);
    let mustRotate = true;
    if (rotatedAt && maxLifetimeDays) {
        const once = new Date(rotatedAt);
        once.setDate(once.getDate() + parseInt(maxLifetimeDays));
        if (once > new Date(new Date().toDateString())) {
            mustRotate = false;
        }
    }
    if (mustRotate) {
        /* either lifetime is over or it was never rotated before */
        console.log('must rotate key:', key.KeyId);
        await rotateKey(key, keyMetadata, kmsClient);
    }
};

/**
 * loops through all enabled customer-managed keys tagged 'rotate' and invokes the subsequent steps
 * @param kmsClient
 */
export const processKeys = async (kmsClient: AWS.KMS) => {
    let marker: string | undefined;
    let keyListResponse;
    const keyList: KeyList = [];
    // there may be many keys, so we fetch the list in chunks of 100 entries (the default page size)
    do {
        keyListResponse = await kmsClient.listKeys({ Marker: marker }).promise();
        if (keyListResponse && keyListResponse.Keys) {
            keyList.push(...keyListResponse.Keys);
        }
        marker = keyListResponse.NextMarker;
    } while (keyListResponse.Truncated);
    for (const key of keyList) {
        const keyDetails = await kmsClient
            .describeKey({
                KeyId: key.KeyId || ''
            })
            .promise();
        // we must first check for customer key else we will have permission problems listing resource tags
        if (keyDetails.KeyMetadata?.KeyState === 'Enabled' && keyDetails.KeyMetadata?.KeyManager === 'CUSTOMER') {
            const tags = await kmsClient
                .listResourceTags({
                    KeyId: key.KeyId || ''
                })
                .promise();
            if (getTag(tags.Tags, KEYS_ROTATION_MARK)) {
                await processKey(key, keyDetails.KeyMetadata, tags.Tags || [], kmsClient);
            }
        }
    }
};

/**
 * the lambda handler for kms key rotation, triggered by a scheduler
 */
export const handler = async () => {
    await processKeys(KeyClient);
};

module.exports.handler = handler;

There are two challenges when using this Lambda function:

  • We need the appropriate rights for the Lambda to perform its task.
  • Each rotation creates a new KMS key, which will show up in the AWS bill in the long run (a customer-managed key currently costs about one US dollar per month).

Important: Don’t even think about deleting KMS keys as long as logs encrypted with them have not been archived in S3. Every encrypted log that still needs to be kept requires the key that was used for its encryption at the time; otherwise the log will remain unreadable forever.
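
To keep an overview of these retired keys, the rotation tag can be read the other way around: in this setup, an enabled customer-managed key without the ‘rotate’ tag is a predecessor that is kept only for decrypting old logs. A sketch (pagination omitted for brevity):


import * as AWS from 'aws-sdk';

const kms = new AWS.KMS({ apiVersion: '2014-11-01' });

// list enabled customer-managed keys that no longer carry the 'rotate' tag:
// in this setup these are retired predecessors kept only for decryption
const listRetiredKeys = async () => {
    const keyListResponse = await kms.listKeys().promise();
    for (const key of keyListResponse.Keys || []) {
        const details = await kms.describeKey({ KeyId: key.KeyId || '' }).promise();
        if (details.KeyMetadata?.KeyState !== 'Enabled' || details.KeyMetadata?.KeyManager !== 'CUSTOMER') {
            continue;
        }
        const tagResponse = await kms.listResourceTags({ KeyId: key.KeyId || '' }).promise();
        if (!tagResponse.Tags?.find(tag => tag.TagKey === 'rotate')) {
            console.log('retired key:', key.KeyId, details.KeyMetadata?.Description);
        }
    }
};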

The corresponding lambda permissions look like this (keyrotation-lambda-role.yaml):


- Effect: Allow
  Action:
      - kms:CreateKey
      - kms:ListAliases
      - kms:ListKeys
      - kms:DescribeKey
      - kms:ListResourceTags
      - kms:TagResource
      - kms:UntagResource
      - kms:GetKeyPolicy
      - kms:PutKeyPolicy
      - kms:UpdateAlias
  Resource: '*'
  Condition:
    StringEquals:
      kms:CallerAccount: 
      aws:ResourceTag/rotate: 'TRUE'

Then you have to adjust the serverless.yaml entry for the new function like this (note that per-function iamRoleStatements require a plugin, for example serverless-iam-roles-per-function):


service: users
 
functions:
  usersCreate:
    events:
      - httpApi: 'POST /users/create'
  usersDelete:
    events:
      - httpApi: 'DELETE /users/delete'
  keyRotation:
    handler: keyrotation.handler # the name of the file plus the suffix "handler"
    timeout: 290 # in case rotation may take a little longer
    events:
        - schedule: rate(1 day)
    iamRoleStatements: ${file(myconfig/iam-roles/keyrotation-lambda-role.yaml)}

resources:
  - ${file(myconfig/cloudwatch/usersCreate.yml)}
  - ${file(myconfig/cloudwatch/usersDelete.yml)}
  - ${file(myconfig/cloudwatch/keyRotation.yml)}

A LogGroup is also generated for the keyRotation Lambda, and it must be encrypted in turn. The same applies to every additional Lambda that is added; do not forget to grant the appropriate permissions.
The requirements for encryption, rotation, and retention are thus met. What remains is exporting the LogGroups to an S3 bucket.

For this we use the following Lambda (export-log-s3.ts):


import * as AWS from 'aws-sdk';

// nothing works without a region at initialization;
// inside a Lambda the AWS_REGION environment variable is set
const regionParam = {
    region: process.env.AWS_REGION
};
// get clients for cloudwatch-logs and simple systems manager
const logs = new AWS.CloudWatchLogs(regionParam);
const ssm = new AWS.SSM(regionParam);

module.exports.handler = async () => {
    await exportLogGroupsToS3(logs);
};

const exportTag = 'exportToS3';
const bucketName = 'cloudwatch-logs';
const exportToTimeInMs = 1440 * 60 * 1000; // we export once per day

/**
 * export all log groups tagged with exportToS3 to a dedicated bucket. The aws client
 * instance is passed in so that it can be mocked in tests.
 * @param logInstance cloudwatch-logs client instance
 */
export const exportLogGroupsToS3 = (logInstance: AWS.CloudWatchLogs) => {
    return new Promise<void>(resolve => {
        logInstance
            .describeLogGroups()
            .promise()
            .then(describeLogGroupsResponse => {
                if (!describeLogGroupsResponse.logGroups) {
                    console.log('no LogGroups could be found');
                    resolve();
                    return;
                }
                describeLogGroupsResponse.logGroups.forEach(logGroup => {
                    const logGroupName = logGroup.logGroupName || '';
                    logInstance
                        .listTagsLogGroup({ logGroupName: logGroupName })
                        .promise()
                        .then(listTagsLogGroupResponse => {
                            // export only tagged logGroups
                            if (listTagsLogGroupResponse.tags && listTagsLogGroupResponse.tags[exportTag]) {
                                exportLogGroupToS3(ssm, logInstance, logGroupName, bucketName);
                                resolve();
                            }
                        });
                });
            });
    });
};

const restoreLastExportPointInTime = (ssmInstance: AWS.SSM, ssmParameterName: string): Promise<string> => {
    return new Promise<string>(resolve => {
        console.log('restoreLastExport:', ssmParameterName);
        ssmInstance
            .getParameter({ Name: ssmParameterName })
            .promise()
            .then(response => {
                resolve(response.Parameter?.Value || '0');
            })
            .catch(() => {
                // the first time will go here
                console.log('parameter was not set yet:', ssmParameterName);
                resolve('0');
            });
    });
};

const saveLastExportPointInTime = (
    ssmInstance: AWS.SSM,
    ssmParameterName: string,
    parameter: string
): Promise<void> => {
    return new Promise<void>(resolve => {
        console.log('saveLastExport:', ssmParameterName);
        const putParams = {
            Name: ssmParameterName,
            Type: 'String',
            Value: parameter,
            Overwrite: true
        };
        ssmInstance
            .putParameter(putParams)
            .promise()
            .then(response => {
                console.log('putParameter response:', response);
                resolve();
            })
            .catch(e => {
                console.log('putParameter failed:', e.message);
                resolve();
            });
    });
};

const exportLogGroupToBucket = (
    logInstance: AWS.CloudWatchLogs,
    logGroupName: string,
    prefix: string,
    exportFromTime: number,
    exportToTime: number,
    bucketName: string
): Promise<void> => {
    return new Promise<void>((resolve, reject) => {
        const exportParams = {
            logGroupName: logGroupName,
            from: exportFromTime,
            to: exportToTime,
            destination: bucketName,
            destinationPrefix: prefix
        };
        logInstance
            .createExportTask(exportParams)
            .promise()
            .then(response => {
                console.log('createExportTaskResponse:', response);
                resolve();
            })
            .catch(e => {
                console.log('createExportTask failed:', e.message);
                reject();
            });
    });
};

/**
 * export a single log group to a dedicated bucket. The aws client instances are passed in so that they can be mocked in tests.
 * @param ssmInstance ssm client instance
 * @param logInstance cloudwatchlog client instance
 * @param logGroupName name of the log group to be exported
 * @param bucketName name of the bucket to export to
 */
export const exportLogGroupToS3 = (
    ssmInstance: AWS.SSM,
    logInstance: AWS.CloudWatchLogs,
    logGroupName: string,
    bucketName: string
) => {
    const ssmParameterName = 'log-exporter' + logGroupName.replace(new RegExp('/', 'g'), '-');
    // get old export time as new start
    restoreLastExportPointInTime(ssmInstance, ssmParameterName).then(exportFromTime => {
        const timeToNowInMs = new Date().getTime();
        // check if time is up for next export
        if (timeToNowInMs - parseInt(exportFromTime) < exportToTimeInMs) {
            console.log('time is not up');
            return;
        }
        console.log('scheduling export task');
        exportLogGroupToBucket(
            logInstance,
            logGroupName,
            logGroupName.replace('/', ''),
            parseInt(exportFromTime),
            timeToNowInMs,
            bucketName
        )
            .then(() => {
                saveLastExportPointInTime(ssmInstance, ssmParameterName, timeToNowInMs.toString()).then(() => {
                    console.log('finished');
                });
            })
            .catch(() => {
                console.log('trying later to export ', logGroupName);
            });
    });
};

We store the date of the last export in the Parameter Store of AWS Systems Manager (SSM) and retrieve it again so that we don’t export anything twice. You could also instantiate the AWS clients inside the functions, but if you want to write tests with aws-sdk-mock, you have to be able to provide mock instances from the test. Hence the instances are passed in as parameters.
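
Such a test could look like the following minimal sketch; aws-sdk-mock is assumed to be in the devDependencies, and the region, log group, and bucket names are just examples:


import * as AWS from 'aws-sdk';
import * as AWSMock from 'aws-sdk-mock';
import { exportLogGroupToS3 } from './export-log-s3';

AWSMock.setSDKInstance(AWS);
// pretend we never exported before, so the export is due
AWSMock.mock('SSM', 'getParameter', (params: any, callback: Function) => {
    callback(null, { Parameter: { Value: '0' } });
});
AWSMock.mock('SSM', 'putParameter', (params: any, callback: Function) => {
    callback(null, {});
});
AWSMock.mock('CloudWatchLogs', 'createExportTask', (params: any, callback: Function) => {
    callback(null, { taskId: 'test-task' });
});

// the clients must be created after the mocks are registered
const mockedSsm = new AWS.SSM({ region: 'eu-central-1' });
const mockedLogs = new AWS.CloudWatchLogs({ region: 'eu-central-1' });
exportLogGroupToS3(mockedSsm, mockedLogs, '/aws/lambda/users-dev-usersCreate', 'cloudwatch-logs');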

Here, too, our Lambda needs the appropriate permissions for its task. They look like this (export-log-s3-lambda-role.yaml):


- Effect: Allow
  Action:
    - logs:describeLogGroups
    - logs:listTagsLogGroup
  Resource: 'arn:aws:logs:::log-group:*' # you cannot avoid * here, as ALL log groups are described and their tags analyzed; otherwise an exception is thrown and nothing is exported
- Effect: Allow
  Action:
    - logs:createExportTask
  Resource: # no wildcard here, as that would be bad security practice
    - 'arn:aws:logs:::log-group:/aws/lambda/--userCreate'
    - 'arn:aws:logs:::log-group:/aws/lambda/--userDelete'
    - 'arn:aws:logs:::log-group:/aws/lambda/--logGroupToS3'
    - 'arn:aws:logs:::log-group:/aws/lambda/--keyRotation'
- Effect: Allow
  Action:
    - ssm:PutParameter
    - ssm:GetParameter
  Resource:
    - 'arn:aws:ssm:::parameter/log-exporter-aws-lambda---userCreate'
    - 'arn:aws:ssm:::parameter/log-exporter-aws-lambda---userDelete'
    - 'arn:aws:ssm:::parameter/log-exporter-aws-lambda---logGroupToS3'
    - 'arn:aws:ssm:::parameter/log-exporter-aws-lambda---keyRotation'

The schedule for the LogGroup exporter is one hour, while the code internally ensures that each LogGroup is exported only once per day. If there is a large number of LogGroups to be exported, AWS may not export all of them at once due to resource limits (only a limited number of export tasks can run concurrently). Since we remember which LogGroups have already been exported and when, the remaining ones are caught up in the next hourly run. This is why the schedule is not simply set to one day.

If you need the right CloudFormation template for the S3 bucket, here it is:


Resources:
  LogGroupBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${self:custom.logGroupBucketName}'
      VersioningConfiguration:
        Status: Enabled
      OwnershipControls:
        Rules:
          - ObjectOwnership: BucketOwnerEnforced
      BucketEncryption: 
        ServerSideEncryptionConfiguration: 
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
  LogGroupBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref LogGroupBucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action:
              - s3:GetBucketAcl
            Principal:
              Service: !Join
                - '.'
                - - 'logs'
                  - !Ref 'AWS::Region'
                  - amazonaws.com
            Resource: !GetAtt LogGroupBucket.Arn
          - Effect: Allow
            Action:
              - s3:PutObject
            Principal:
              Service: !Join
                - '.'
                - - 'logs'
                  - !Ref 'AWS::Region'
                  - amazonaws.com
            Resource: !Join
              - ''
              - - !GetAtt LogGroupBucket.Arn
                - '/*'
            Condition:
              StringEquals:
                s3:x-amz-acl: bucket-owner-full-control
          - Effect: Allow
            Principal:
              Service: logging.s3.amazonaws.com
            Action:
              - s3:PutObject
            Resource:
              - arn:aws:s3:::${self:custom.logGroupBucketName}/s3/${self:custom.clientBucketName}*
            Condition:
              ArnLike:
                aws:SourceArn: arn:aws:s3:::${self:custom.clientBucketName}
              StringEquals:
                aws:SourceAccount: !Ref AWS::AccountId
          - Effect: Deny
            Action: '*'
            Resource:
                - !GetAtt LogGroupBucket.Arn
                - !Join [ '', [ !GetAtt LogGroupBucket.Arn, '/*' ]]
                - arn:aws:s3:::${self:custom.logGroupBucketName}/s3/${self:custom.clientBucketName}*
            Principal: '*'
            Condition:
                Bool:
                    'aws:SecureTransport': false

Then our serverless.yaml should look like this:


service: users
 
functions:
  usersCreate:
    events:
      - httpApi: 'POST /users/create'
  usersDelete:
    events:
      - httpApi: 'DELETE /users/delete'
  keyRotation:
    handler: keyrotation.handler
    timeout: 290 # in case rotation may take a little longer
    events:
       - schedule: rate(1 day)
    iamRoleStatements: ${file(myconfig/iam-roles/keyrotation-lambda-role.yaml)}

  logGroupToS3:
    handler: export-log-s3.handler
    timeout: 700 # Set 700 seconds for the case where the export may take a little longer
    events:
       - schedule: rate(1 hour)
    iamRoleStatements: ${file(myconfig/iam-roles/export-log-s3-lambda-role.yaml)}

resources:
  - ${file(myconfig/cloudwatch/usersCreate.yml)}
  - ${file(myconfig/cloudwatch/usersDelete.yml)}
  - ${file(myconfig/cloudwatch/keyRotation.yml)}
  - ${file(myconfig/cloudwatch/logGroupToS3.yml)}
  - ${file(serverless-config/s3/compliance-log-bucket.yml)}

Conclusion

Securing AWS is not always an easy job. In this blog post, I showed how to implement encryption with key rotation at a customizable frequency, and how to export CloudWatch logs to a centralized S3 bucket for long-lasting, encrypted storage. The Serverless Framework, a free, open-source framework written in Node.js, can help you with this task, and this post showed how to leverage its extensibility to add enhanced security to CloudWatch logs.
