
Seamless Event-Driven Architecture with AWS Event Bridge


What is Amazon EventBridge?

Amazon EventBridge is a serverless event bus service from Amazon Web Services (AWS) that routes events between AWS services, your applications, and third-party SaaS applications. It provides a central place for applications to publish and receive events, making it easier to build event-driven architectures.

It can react to state changes in both AWS and non-AWS resources.

How does Amazon EventBridge work?

EventBridge routes events between AWS services, your applications, and third-party SaaS applications. The event bus is the central component of EventBridge and provides a way to route events from different sources to different targets.

An event source is a service or application that generates events, and an event target is a service or application that receives them. You can set up rules in EventBridge to route events from an event source to one or more event targets.

Architecturally, event sources emit events whenever a resource's state changes, and those events are sent to an EventBridge event bus. The events are then processed by rules, and those rules can forward them to various destinations.

Let's look at another example. Say we have an EC2 instance as an event source, and an event happens: a termination event for the EC2 instance, which gets forwarded to the EventBridge event bus. A rule matches the event and sends it to a destination, in this case an SNS topic, after which an SNS notification is delivered to an email address.
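Conceptually, a rule's event pattern is just a JSON document matched against incoming events. The sketch below (plain Python, no AWS calls) shows a simplified matcher for the EC2 termination example; real EventBridge pattern matching supports more operators than this, so treat it as an illustration only.

```python
# Simplified EventBridge-style pattern matching: every key in the
# pattern must exist in the event, and the event's value must appear in
# the pattern's list of allowed values. (Real EventBridge supports
# richer operators such as prefix and numeric matching.)
def matches(pattern: dict, event: dict) -> bool:
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not isinstance(event[key], dict) or not matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

# Rule pattern: EC2 instance state change to "terminated".
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["terminated"]},
}

# A sample event roughly as EventBridge would deliver it (abridged).
event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"state": "terminated", "instance-id": "i-0123456789abcdef0"},
}

print(matches(pattern, event))  # True: the rule fires and the SNS target is invoked
print(matches(pattern, {**event, "detail": {"state": "running"}}))  # False
```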

Terms associated with EventBridge

Events: An event indicates a change in an environment, for example an EC2 instance changing from pending to running.

Rules: Incoming events are routed to targets only if they match a rule you have specified.

Targets: A target can be a Lambda function, an Amazon EC2 instance, an Amazon Kinesis data stream, SNS, SQS, a CI/CD pipeline, Step Functions, and so on. Targets receive events in JSON format.

Event buses: An event bus receives events. When you create a rule, you associate it with a specific event bus, and the rule matches only events received by that event bus.

When an event source generates an event, it is sent to the EventBridge event bus. If the event matches one or more rules that you've defined, EventBridge forwards it to the corresponding event targets.

Now let’s get our hands dirty.

Log into the AWS management console, launch an instance, and copy the instance ID.

Then in the search box, type EventBridge and select EventBridge under Services. In the EventBridge dashboard, click Create rule.

In the Create rule screen, give your rule a name; call it EC2-state-change. For the event bus, choose default, then toggle on Enable the rule on the selected event bus. Click Next.

For the event source, select AWS events. Scroll down and, under Creation method, select Use pattern form.

Under Event pattern, for Event source, open the dropdown and select AWS services; then, for AWS service, select EC2.

For Event type, select EC2 Instance State-change Notification. For Event Type Specification 1, select Specific state(s), then open the dropdown and select terminated.

For Event Type Specification 2, select Specific instance Id(s), paste in the instance ID you copied earlier, then click Next.

Next, scroll down and specify a target. Choose SNS topic. For the topic, go ahead and create your SNS topic; I already have an SNS topic called Email-notification.

Those are the only settings we need, so click Create rule.
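The same rule can also be created from code. The sketch below builds the request parameters that the boto3 `put_rule` and `put_targets` calls would take for the console choices above; the instance ID and topic ARN are placeholders, and the boto3 calls themselves (commented out) assume configured AWS credentials.

```python
import json

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: your instance ID
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:Email-notification"  # placeholder

# Event pattern equivalent to the console choices above: source = EC2,
# event type = instance state change, state = terminated, one instance.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {
        "state": ["terminated"],
        "instance-id": [INSTANCE_ID],
    },
}

put_rule_params = {
    "Name": "EC2-state-change",
    "EventBusName": "default",
    "EventPattern": json.dumps(event_pattern),
    "State": "ENABLED",
}

put_targets_params = {
    "Rule": "EC2-state-change",
    "Targets": [{"Id": "sns-email", "Arn": TOPIC_ARN}],
}

# With AWS credentials configured, the equivalent boto3 calls would be:
# import boto3
# events = boto3.client("events")
# events.put_rule(**put_rule_params)
# events.put_targets(**put_targets_params)

print(put_rule_params["EventPattern"])
```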

Now that the rule is enabled, let’s terminate our EC2 instance and see what happens.

Back in EC2, click Instance state and terminate your instance.

Go to your email and confirm that you’ve received a notification. Here is the notification I received.

This brings us to the end of this demo. Stay tuned for more.

Make sure to clean up your resources to avoid surprise bills.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


Amazon Inspector: Automated and continual vulnerability management at scale


What is Amazon Inspector?

Amazon Inspector automatically discovers workloads, such as Amazon EC2 instances, containers, and Lambda functions, and scans them for software vulnerabilities and unintended network exposure.

How it works.

Amazon Inspector is an automated vulnerability management service that scans AWS workloads for software vulnerabilities and unintended network exposure. At a high level, Amazon Inspector depends on an SSM agent being installed on the EC2 instance that is to be scanned and reported on. Additionally, the instance needs an IAM role that grants it SSM permissions; Inspector uses the SSM agent to connect to the instance. Note that as of November 2023, there is a new agentless scan option in preview.

Amazon Inspector can be enabled at the organization level to scan all accounts in an organization; however, the scope of this blog is a single account.

In this use case, we will see how Amazon Inspector helps identify a network vulnerability by performing a network reachability check.

Let’s proceed as follows.

Log in to your AWS account using your admin account or an account with admin privileges.

Creation of a new IAM role for the EC2 use case

Remember, Amazon Inspector depends on an SSM role, or a role with SSM permissions, to communicate with the SSM agent inside the EC2 instance. Let’s create this role.

Navigate to IAM, click Roles, then click the Create role button. You will be prompted to select an entity type; in our case, select AWS service, and for the service select EC2. This creates a trust policy that allows EC2 to assume the role. Click Next.

Now we must add permissions to the role that the EC2 instance will assume. On the Add permissions screen, search for AmazonSSMManagedInstanceCore, select it, and click Next.

In the next window, give the role a name (SSMRoleForInspector) and click Create role.

Creation of EC2 Instance

Go to the AWS Management Console, select Instances in the EC2 console, then click Launch instances in the instances dashboard.

Set the name to EC2Demo, and under Instance type select t2.micro, which is free tier eligible. Scroll down.

Under Application and OS Images, select the Quick Start tab, then select Amazon Linux. Under AMI, select Amazon Linux 2 AMI; this is also free tier eligible.

Scroll down to the firewall and security section, choose Select existing security group, then choose the default security group. Click Launch instance.

Attach the SSM agent role to the EC2 instance.

Select your instance, click Actions, then Security, then Modify IAM role.

On the Modify IAM role screen, select the role you created earlier; in my case, I am selecting SSMRoleForInspector. Click Update IAM role.

Select your instance, move to the Security tab, and select the default security group. Then click Edit inbound rules.

Click Add rule and open port 21 to anywhere from the internet (0.0.0.0/0), then click Save rules.

Note: Keeping port 21 open on our instances is not recommended; we are deliberately introducing a security threat here for this demo.
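To see why this gets flagged, here is a small standalone check (no AWS calls) in the spirit of Inspector's network reachability reasoning: given security-group-style inbound rules, it reports risky ports exposed to the whole internet. The port list and rule shape are illustrative, not Inspector's actual data model.

```python
# Ports that are generally risky to expose to 0.0.0.0/0
# (illustrative list: FTP, SSH, Telnet, RDP).
RISKY_PORTS = {21, 22, 23, 3389}

def exposed_risky_ports(inbound_rules):
    """Return risky ports that an inbound rule opens to the whole internet."""
    exposed = set()
    for rule in inbound_rules:
        if rule.get("cidr") == "0.0.0.0/0":
            for port in range(rule["from_port"], rule["to_port"] + 1):
                if port in RISKY_PORTS:
                    exposed.add(port)
    return sorted(exposed)

# The rule we just added in the console: port 21 open to anywhere,
# alongside a benign HTTPS rule.
rules = [
    {"from_port": 21, "to_port": 21, "cidr": "0.0.0.0/0"},
    {"from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
]
print(exposed_risky_ports(rules))  # [21]
```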

Running Amazon Inspector to discover security vulnerabilities

Go to the Management Console, and under Services select Amazon Inspector. Then click Get started.

Activate Inspector and review its permissions.

Once Inspector is activated, we get a green banner as our first scan gets underway.

Go to Account management, move to the Instances tab, and select Unmanaged instances. You will see a message for your instance.

This means the instance is not managed by SSM. Click the instructions hyperlink to remediate the issue.

It will redirect you to AWS Systems Manager; in the input parameters, choose the instance ID and click Execute.

Once execution is complete, go to Amazon Inspector and select the instance findings.

Go to Findings, where you will see the induced security threat flagged as High.

Go to the EC2 instance’s inbound security group rules and delete the induced port 21 rule. Click Save rules.

To review the findings again, go to Account management, then Instances, and select Unmanaged instances. Follow the instructions as before, supplying the instance ID, and click Execute.

With this, we have seen how Amazon Inspector helps find network reachability vulnerabilities.

To avoid billing, terminate the instances you created and make sure you deactivate Amazon Inspector.

This brings us to the end of this demo. Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


Monitoring AWS API Activity with AWS CloudTrail


AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. You can use it to capture information about API actions happening in your account, whether they originate from the AWS Management Console, AWS SDKs, command line tools, or other AWS services.

What is CloudTrail?

CloudTrail continuously monitors and logs account activity across all AWS services, including actions taken by a user, role, or AWS service. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.
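A CloudTrail record is a JSON document carrying exactly these fields. As a sketch, the snippet below parses a minimal hand-written sample record (all values are made up for illustration; real records carry many more fields) and pulls out the caller identity, time, source IP, and action.

```python
import json

# A minimal, hand-written CloudTrail-style event record for illustration.
record_json = """
{
  "eventTime": "2024-01-15T10:23:45Z",
  "eventName": "TerminateInstances",
  "eventSource": "ec2.amazonaws.com",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "alice"},
  "requestParameters": {"instancesSet": {"items": [{"instanceId": "i-0123456789abcdef0"}]}}
}
"""

def summarize(record: dict) -> str:
    """One-line summary: who called what, when, and from where."""
    who = record["userIdentity"].get("userName", record["userIdentity"]["type"])
    return (f"{record['eventTime']} {who} called "
            f"{record['eventName']} ({record['eventSource']}) "
            f"from {record['sourceIPAddress']}")

record = json.loads(record_json)
print(summarize(record))
```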

Why Use CloudTrail?

Here are some key reasons to use CloudTrail:

Audit Compliance: CloudTrail logs provide detailed records of all API calls, which can be used to audit compliance.
Security Analysis: The API call logs can be analyzed to detect anomalous activity and unauthorized access to determine security issues.
Operational Issues: The activity history can help troubleshoot operational issues by pinpointing when an issue began and what actions were taken.
Resource Changes: You can identify what changes were made to AWS resources by viewing CloudTrail events.

CloudTrail Log Files

CloudTrail log files contain the history of API calls made on your account. These log files are stored in Amazon S3 buckets that you specify. You can define S3 buckets per region or use the same bucket for all regions.

The log files capture API activity from all Regions and are delivered about every 5 minutes. You can easily search and analyze the logs using Amazon Athena, Amazon OpenSearch Service (formerly Amazon Elasticsearch Service), and other tools.

CloudTrail Events

CloudTrail categorizes events into two types:

Management events

Provides information about management operations performed on resources in your AWS account. These include operations like creating, modifying, and deleting resources.

Data events

Provides information about resource operations performed on or in a resource. These include operations like Amazon S3 object-level API activity.

By default, AWS logs and retains management events for 90 days, but this timeframe may not meet your requirements. To overcome this, you can create a CloudTrail trail, which lets you log events to S3 for indefinite retention. Each trail you create can be region-specific or applied to all regions. Furthermore, you can leverage CloudWatch Events to trigger actions based on API calls that are made and logged in CloudTrail.

Using information generated by CloudTrail.

In this architecture, AWS CloudTrail logs API actions for 90 days. We can then choose to create a trail and log our events to Amazon S3 indefinitely. We can also enable log file integrity validation, which checks whether the logged events have been tampered with, ensuring the accuracy of logged events for auditing and compliance. Additionally, we can trigger notifications through SNS topics upon log file publication. We can also forward data to CloudWatch Logs, enabling actions like setting alarms or using subscription filters; alarms triggered by CloudWatch Logs can execute Lambda functions or notify through SNS topics. Finally, forwarding information to CloudWatch Events can trigger Lambda functions based on API actions. So there are many ways to use the information generated by CloudTrail.

Hands-on creation of CloudTrail trail.

Log in to the Management Console, then in the search box under Services, type CloudTrail and select CloudTrail under Services.

In the CloudTrail dashboard, click Create trail.

Under Trail name, give it a name; call it management-events. We are creating this trail only for this account, so we will not tick the box to enable it for all accounts in the organization.

The trail needs a storage location, and by default it will create an S3 bucket with a unique name, so we will leave that as the default.

To encrypt the information in your bucket, select Create new KMS key and call it CloudTrail.

Log file validation is enabled by default. Scroll down.

Under CloudWatch Logs, enable it. CloudTrail needs a role to send the trail to CloudWatch Logs, so select New role, give it a name, then scroll down and click Next.

Under the type of events, we will go with management events.

For API activity, select both Read and Write, then click Next, review, and click Create trail.

We have successfully created a trail, and we can see its status is Logging.
This brings us to the end of this blog. Remember to clean up your resources.

Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!



Safeguarding Your Secrets: The Importance of Using AWS Secrets Manager Part One

What is AWS Secrets Manager?

AWS Secrets Manager is a specialized service designed to securely store, manage, and retrieve sensitive information. It enables us to replace embedded credentials in our code, such as passwords, with an API call to Secrets Manager that retrieves the secret programmatically. This ensures that anyone examining our code cannot compromise the secrets, as they no longer exist in the application code, and it keeps the secrets independent of the application's development. Furthermore, we can configure Secrets Manager to automatically rotate a secret on a specified schedule, allowing us to replace long-term secrets with short-term ones and significantly reducing the risk of compromise.

Key Benefits of Using a Secrets Manager:

  • Enhanced Security
  • Centralized Management
  • Automated Rotation
  • Audit Trails

Concepts required to understand AWS Secrets Manager

Secret – Consists of the secret information (the secret value) plus metadata about the secret. A secret value can be a string or binary; to store multiple string values in one secret, AWS recommends using a JSON text string with key/value pairs. A secret’s metadata includes an Amazon Resource Name (ARN).

Version – A secret has versions that hold copies of the encrypted secret value. When you change the secret value or rotate the secret, Secrets Manager creates a new version. Secrets Manager doesn’t store a linear history of versions; instead, it keeps track of three specific versions by labeling them: the current version (AWSCURRENT), the previous version (AWSPREVIOUS), and the pending version during rotation (AWSPENDING).

Rotation – The process of periodically updating a secret to make it more difficult for an attacker to access the credentials. In Secrets Manager, you can set up automatic rotation for your secrets; when Secrets Manager rotates a secret, it updates the credentials in both the secret and the database or service.

Rotation strategy – Secrets Manager offers two rotation strategies:

Single user: This strategy updates the credentials for one user in one secret. The user must have permission to update their own password. It is the simplest rotation strategy and is appropriate for most use cases.

Alternating users: This strategy updates the credentials for two users in one secret. You create the first user, and during the first rotation the rotation function clones it to create the second user. Every time the secret rotates, the rotation function alternates which user’s password it updates. However, most users lack permission to clone themselves, so you must provide the credentials for a superuser in another secret.
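The alternating-users strategy can be pictured as flipping which user the next rotation updates. Here is a toy model of that idea (usernames and the secret layout are illustrative; in reality the work is done by a Secrets Manager rotation Lambda against the database):

```python
# Toy model of the alternating-users strategy: two users share one secret,
# and each rotation updates the password of the user that is NOT currently
# active, then makes that user current.
def rotate(secret):
    users = secret["users"]  # e.g. ["app_user1", "app_user2"]
    idle = [u for u in users if u != secret["current_user"]][0]
    secret["passwords"][idle] = f"new-password-for-{idle}"  # stand-in for a generated password
    secret["current_user"] = idle  # the freshly rotated user becomes AWSCURRENT
    return secret

secret = {
    "users": ["app_user1", "app_user2"],
    "current_user": "app_user1",
    "passwords": {"app_user1": "old1", "app_user2": "old2"},
}

rotate(secret)
print(secret["current_user"])  # app_user2
rotate(secret)
print(secret["current_user"])  # app_user1
```

Because rotation alternates, the application can keep using the previous user's still-valid credentials while the other user's password is being changed.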

Who Can Use Secrets Manager

Users of Secrets Manager typically have one of the following roles:

IT Admin: responsible for storing and managing secrets.

Security Admin: responsible for ensuring regulatory and compliance requirements are met. You can use Secrets Manager to audit and monitor secret usage, and to ensure the necessary secret rotation.

Developer: you can onboard to Secrets Manager so that you don’t have to worry about managing secrets yourself.

Features

Rotate secrets safely: You can easily rotate secrets without worrying about updating or deploying code.

Manage access with fine-grained policies: Identity and Access Management (IAM) policies let you manage access to secrets. For example, you can create a policy that allows developers to access secrets for development purposes.

Secure and audit secrets centrally: You can secure your secrets by encrypting them with encryption keys, using AWS Key Management Service (KMS).

Pay as you go: Charges apply only based on the number of secrets managed by Secrets Manager and the number of Secrets Manager API calls you make.

Retrieve secrets programmatically: With Secrets Manager, you can programmatically retrieve encrypted secret values at runtime.
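Programmatic retrieval usually means one API call plus parsing the returned SecretString. The helper below is runnable as-is on a sample payload; the `get_database_credentials` function shows the assumed boto3 call shape and is not executed here, since it needs AWS credentials. Secret names and values are made up.

```python
import json

def parse_secret_string(secret_string: str) -> dict:
    """Secrets Manager stores key/value secrets as a JSON string."""
    return json.loads(secret_string)

def get_database_credentials(secret_id: str) -> dict:
    # Assumed boto3 usage; requires AWS credentials, so not executed here.
    import boto3
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return parse_secret_string(response["SecretString"])

# Sample SecretString payload, as stored for an RDS-style secret (values made up).
sample = '{"username": "admin", "password": "example-password", "host": "db.example.com"}'
creds = parse_secret_string(sample)
print(creds["username"])  # admin
```

Because the secret lives outside the code, rotating it requires no redeploy: the next `get_secret_value` call simply returns the new version.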

Use cases of AWS Secrets Manager

  • Newer service, meant for storing secrets.
  • Capability to force rotation of secrets every X days.
  • Automate generation of secrets on rotation (uses Lambda).
  • Integration with Amazon RDS (MySQL, PostgreSQL, Aurora).
  • Ability to encrypt secrets using KMS.
  • Mostly meant for RDS integration.
This brings us to the end of this blog. Stay tuned for more.

If you have any questions concerning this article or have an AWS project that requires our assistance, please reach out to us by leaving a comment below or email us at [email protected].

Thank you!


How to Store Secrets with Secrets Manager – Part II


Securing secrets is crucial in modern software development, and AWS Secrets Manager serves as a key solution. As a fully managed service from Amazon Web Services (AWS), it effectively safeguards sensitive information, such as API keys and database passwords.

AWS Secrets Manager enables you to centralize and manage access to sensitive data, reducing the risk of unauthorized access. Additionally, it facilitates regular rotation and auditing of secrets, enhancing overall application security. Moreover, its seamless integration with other AWS services simplifies the implementation of security best practices in your infrastructure.
Task: Creating an RDS database, managing credentials with Secrets Manager, and auditing with CloudTrail

We will begin by creating an RDS database and managing its credentials through Secrets Manager, then dive into auditing with CloudTrail to maintain a comprehensive and secure AWS environment.

a) Sign in to the AWS Management Console and create an RDS MySQL instance.
b) Store a new secret.
c) Verify the secret created.
d) Use CloudTrail to monitor Secrets Manager activities.

Hands-on:

a) Sign in to the AWS Management Console and create an RDS MySQL instance.

Log into the AWS Management Console, and in the search box type RDS, then select RDS under Services.

In the RDS console, click Create database.

In the Create database screen, select the following:

Choose a database creation method: Standard create
Engine type: MySQL
Templates: Free tier

In the Settings and DB instance class sections, fill in the details as follows:

DB instance identifier: SecretManagerLab (or any name)
Master username: admin (or any username for your database instance)
Master password: dcVRBrxLbhacVU6 (or any password for your instance)
DB instance class: db.t2.micro

Note: Make sure to remember the username and password, or simply paste them into a text file.

In the Storage section, keep everything as default and make sure to uncheck Enable storage autoscaling.

In the Connectivity section, make sure that Public access is set to No.

Keep everything else as default, then click Create database.

It takes some time for your database to be created.

We’ve created an RDS MySQL instance successfully!
b) Store a new secret.

In the search box, type Secrets Manager and select Secrets Manager under Services.

In the AWS Secrets Manager dashboard, click Store a new secret.

For Secret type, select Credentials for Amazon RDS database and enter the following details:

User name: (the username of your database instance; here we used admin)
Password: (the password of your DB instance; here we used dcVRBrxLbhacVU6)
Encryption key: (keep as default)

Select the database instance you created in the previous step (named SecretManagerLab) and click Next.

On the next screen, give the secret a name (LabSecret) and keep everything else as default. Click Next.

The secret LabSecret has been stored with Secrets Manager.

c) Verify the secret created.

Once the secret is created and rotation is configured, click the secret name (LabSecret).

Now click the Retrieve secret value button.

We can see the details of our secret value, including the password.
d) Using CloudTrail to monitor Secrets Manager activities.

Search for CloudTrail in the search box, then select it under Services.

Under Lookup attributes, select Event name and enter the event name GetSecretValue.

You can see the names of all the users who tried to access the secret, along with the event time.
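The same lookup can be done over event records in code. The sketch below filters a small hand-written list of CloudTrail-style events (usernames and timestamps are made up) for GetSecretValue calls, mirroring the console's Event name filter.

```python
# Hand-written CloudTrail-style events for illustration; real records
# come from the CloudTrail LookupEvents API or the S3 log files.
events = [
    {"eventName": "GetSecretValue", "username": "alice",
     "eventTime": "2024-01-15T10:00:00Z"},
    {"eventName": "DescribeDBInstances", "username": "bob",
     "eventTime": "2024-01-15T10:05:00Z"},
    {"eventName": "GetSecretValue", "username": "bob",
     "eventTime": "2024-01-15T10:07:00Z"},
]

def who_accessed_secrets(events):
    """Return (username, eventTime) for every GetSecretValue call."""
    return [(e["username"], e["eventTime"])
            for e in events if e["eventName"] == "GetSecretValue"]

for user, when in who_accessed_secrets(events):
    print(f"{user} read a secret at {when}")
```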
AWS Secrets Manager is a service provided by Amazon Web Services (AWS) that helps you manage and protect sensitive information such as passwords, API keys, and other credentials. It allows you to securely store, access, and rotate these secrets, reducing the risk of unauthorized access and improving the overall security of your applications and services.

AWS CloudTrail is an AWS service that helps you enable operational and risk auditing, governance, and compliance of your AWS account. Actions taken by a user, role, or AWS service are recorded as events in CloudTrail.

This brings us to the end of this demo. Make sure to tear down all the resources you created.

If you have any questions concerning this article or have an AWS project that requires our assistance, please leave a comment below or email us at [email protected].

Thank you!