Question | Description | Answer | Link |
---|---|---|---|
How do I install custom packages in my Amazon MWAA environment? | I want to install custom packages using plugins.zip in my Amazon Managed Workflows for Apache Airflow (Amazon MWAA) environment. | "I want to install custom packages using plugins.zip in my Amazon Managed Workflows for Apache Airflow (Amazon MWAA) environment.Short descriptionYou can install Python libraries in Amazon MWAA using the requirements.txt and plugins.zip files. When you install packages using the requirements.txt file, the packages are installed from PyPi.org, by default. However, if you need to ship libraries (.whl files) with compiled artifacts, then you can install these Python wheels using the plugins.zip file.Using the plugins.zip file, you can install custom Apache Airflow operators, hooks, sensors, or interfaces by simply dropping files inside a plugins.zip file. This file is written on the backend Amazon ECS Fargate containers at the location /usr/local/airflow/plugins/. Plugins can also be used to export environment variables and authentication and config files, such as .crt and .yaml.ResolutionInstall libraries using Python WheelsA Python wheel is a package file with compiled artifacts. You can install this package by placing the (.whl) file in a plugins.zip and then refer to this file in requirements.txt. When you update the environment after adding the .whl file into plugins.zip, the .whl file is shipped to the location /usr/local/airflow/plugins/ in the underlying Amazon Elastic Container Service (Amazon ECS) Fargate containers. To install Python Wheels, do the following:1. Create the plugins.zip file:Create a local Airflow plugins directory on your system by running the following command:$ mkdir pluginsCopy the .whl file into the plugins directory that you created.Change the directory to point to your local Airflow plugins directory by running the following command:$ cd pluginsRun the following command to make sure that the contents have executable permissions:plugins$ chmod -R 755Zip the contents within your plugins folder by running the following command:plugins$ zip -r plugins.zip2. Include the path of the .whl file in the requirements.txt file (example: /usr/local/airflow/plugins/example_wheel.whl).Note: Remember to turn on versioning for your Amazon Simple Storage Service (Amazon S3) bucket.3. Upload the plugins.zip and requirements.txt files into an S3 bucket (example: s3://example-bucket/plugins.zip).4. To edit the environment, open the Environments page on the Amazon MWAA console.5. Select the environment from the list, and then choose Edit.6. On the Specify details page, in the DAG code in Amazon S3 section, do either of the following based on your use case:Note: If you're uploading the plugins.zip or requirements.txt file into your environment for the first time, then select the file and then choose the version. If you already uploaded the file and recently updated it, then you can skip selecting the file and choose only the version.Choose Browse S3 under the Plugins file - optional field.Select the plugins.zip file in your Amazon S3 bucket, and then choose Choose.For Choose a version under Plugins file - optional, select the latest version of the file that you've uploaded.--or--Choose Browse S3 under Requirements file - optional.Select the requirements.txt file in your Amazon S3 bucket, and then choose Choose.For Choose a version under Requirements file - optional, select the latest version of the file that you've uploaded.7. 
Choose Next, and then choose Save.Install custom operators, hooks, sensors, or interfacesAmazon MWAA supports Apache Airflow’s built-in plugin manager that allows you to use custom Apache Airflow operators, hooks, sensors, or interfaces. These custom plugins can be placed in the plugins.zip file using both a flat and nested directory structure. For some examples of custom plugins, see Examples of custom plugins.Create a custom plugin to generate runtime environment variablesYou can also create a custom plugin that generates environment variables at runtime on your Amazon MWAA environment. Then, you can use these environment variables in your DAG code. For more information, see Creating a custom plugin that generates runtime environment variables.Export PEM, .crt, and configuration filesIf you don't need specific files to be continuously updated during environment execution, you can use plugins.zip to ship these files. Also, you can place files for which you don't need to grant access to users that write DAGs (example: certificate (.crt), PEM, and configuration YAML files). After you zip these files into plugins.zip, upload plugins.zip to S3, and then update the environment. The files are replicated with required permissions to access /usr/local/airflow/plugins.You can zip custom CA certificates into the plugins.zip file by running the following command:$ zip plugins.zip ca-certificates.crtThe file is now located at /usr/local/airflow/plugins/ca-certificates.crt.To zip the kube_config.yaml into the plugins.zip file, run the following command:$ zip plugins.zip kube_config.yamlThe file is now located at /usr/local/airflow/plugins/kube_config.yaml.Troubleshoot the installation processIf you have issues during the installation of these packages, you can test your DAGs, custom plugins, and Python dependencies locally using aws-mwaa-local-runner.To troubleshoot issues with installation of Python packages using the plugins.zip file, you can view the log file (requirements_install_ip) from either the Apache Airflow Worker or Scheduler log groups.Important: It's a best practice to test the Python dependencies and plugins.zip file using the Amazon MWAA CLI utility (aws-mwaa-local-runner) before installing the packages or plugins.zip file on your Amazon MWAA environment.Related informationPluginsFollow" | https://repost.aws/knowledge-center/mwaa-install-custom-packages |
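The packaging commands in the answer above are listed one at a time; the following is a minimal end-to-end sketch of the same steps, assuming a hypothetical wheel file named example_wheel.whl and an S3 bucket named example-bucket (both placeholders). Note that chmod and zip are run against the current directory (.), which the inline commands above imply.

```sh
# Build plugins.zip around a local wheel file (names are placeholders)
mkdir plugins
cp example_wheel.whl plugins/
cd plugins
chmod -R 755 .           # give the contents executable permissions
zip -r ../plugins.zip .  # zip the directory contents
cd ..

# Point requirements.txt at the wheel's runtime location on the workers
echo "/usr/local/airflow/plugins/example_wheel.whl" > requirements.txt

# Upload both files to the versioned S3 bucket used by the environment
aws s3 cp plugins.zip s3://example-bucket/plugins.zip
aws s3 cp requirements.txt s3://example-bucket/requirements.txt
```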
How do I assign an existing IAM role to an EC2 instance? | I have an AWS Identity and Access Management (IAM) role that I want to assign to an Amazon Elastic Compute Cloud (Amazon EC2) instance. | "I have an AWS Identity and Access Management (IAM) role that I want to assign to an Amazon Elastic Compute Cloud (Amazon EC2) instance.ResolutionUse either the Amazon EC2 console or the AWS Command Line Interface (AWS CLI) to assign a role. For more information, see Attach an IAM role to an instance.If you use AWS Systems Manager, then wait for AWS Systems Manager Agent (SSM Agent) to detect the new IAM role, or restart SSM Agent. For more information, see Setting up AWS Systems Manager.If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Related informationCreating IAM rolesTutorial: Get started with Amazon EC2 Linux instancesUsing an IAM role to grant permissions to applications running on Amazon EC2 instancesI created an IAM role, but the role doesn't appear in the dropdown list when I launch an instance. What do I do?Follow" | https://repost.aws/knowledge-center/assign-iam-role-ec2-instance |
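For the AWS CLI path, a hedged sketch of attaching an instance profile (the console's "IAM role" choice attaches an instance profile that contains the role); the instance ID and profile name are placeholders:

```sh
# Attach an instance profile that contains the IAM role (placeholder names)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=MyInstanceProfile

# Confirm the association
aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-0123456789abcdef0
```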
My EC2 Linux instance failed its system status check. How do I troubleshoot this? | My Amazon Elastic Compute Cloud (Amazon EC2) instance failed its system status check and is no longer accessible. How do I troubleshoot system status check failures? | "My Amazon Elastic Compute Cloud (Amazon EC2) instance failed its system status check and is no longer accessible. How do I troubleshoot system status check failures?Short descriptionSystem status check failures indicate that there is an issue with the hardware hosting your EC2 instance.ResolutionThe instance must be migrated to a new, healthy host by stopping and starting the instance. You can wait for Amazon EC2 to perform the stop and start of your instance. Or, you can manually stop and start the instance to migrate it to a new, healthy host.Note: A stop and start isn't equivalent to a reboot. A start is required to migrate the instance to healthy hardware.Warning: Before stopping and starting your instance, be sure you understand the following:Instance store data is lost when you stop and start an instance. If your instance is instance store-backed or has instance store volumes containing data, the data is lost when you stop the instance. For more information, see Determining the root device type of your instance.If your instance is part of an Amazon EC2 Auto Scaling group, stopping the instance may terminate the instance. If you launched the instance with Amazon EMR, AWS CloudFormation, or AWS Elastic Beanstalk, your instance might be part of an AWS Auto Scaling group. Instance termination in this scenario depends on the instance scale-in protection settings for your Auto Scaling group. If your instance is part of an Auto Scaling group, then temporarily remove the instance from the Auto Scaling group before starting the resolution steps.Stopping and starting the instance changes the public IP address of your instance. It's a best practice to use an Elastic IP address instead of a public IP address when routing external traffic to your instance. If you are using Route 53, you might have to update the Route 53 DNS records when the public IP changes.If the shutdown behavior of the instance is set to Terminate, the instance is terminated if you shut down the instance from the operating system using the shutdown or poweroff command. To avoid this, change the instance shutdown behavior.In rare circumstances, the infrastructure-layer issue can prevent the underlying host from responding to the stop and start API calls. This causes the instance to be stuck in the stopping state. For instructions on how to force the instance to stop, see Troubleshooting stopping your instance.You can create an Amazon CloudWatch alarm that monitors and automatically recovers the EC2 instance from issues that involve underlying hardware failure.Related informationWhy is my EC2 Linux instance unreachable and failing one or both of its status checks?I received a notice stating that Amazon EC2 detected degradation of the underlying hardware hosting my EC2 instance. What do I need to do?Follow" | https://repost.aws/knowledge-center/ec2-linux-system-status-check-failure |
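A sketch of the manual stop and start, plus the optional auto-recovery alarm mentioned in the answer; the instance ID, Region, and alarm name are placeholders, and the recover action applies only to instance types that support simplified automatic recovery:

```sh
# Migrate the instance to new underlying hardware (instance store data is lost)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Optional: recover automatically on future system status check failures
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-system-check-recover \
  --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum --period 60 --evaluation-periods 2 \
  --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:recover
```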
Why am I charged an early delete fee for Amazon S3 Glacier? | I deleted an Amazon Simple Storage Service (Amazon S3) Glacier archive and was charged a fee. | "I deleted an Amazon Simple Storage Service (Amazon S3) Glacier archive and was charged a fee.ResolutionAmazon S3 Glacier storage classes have a minimum storage duration charge. If you delete an archive before meeting the minimum storage duration, you are charged a prorated early deletion fee.Amazon S3 Glacier minimum storage duration depends on the storage class used. For a summary of minimum storage duration for each storage class, see Performance across the S3 storage classes.For information about Amazon S3 Glacier pricing and how early deletion fees are calculated, see Amazon S3 pricing.Related informationAmazon S3 Glacier FAQsFollow" | https://repost.aws/knowledge-center/glacier-early-delete-fees |
How do I troubleshoot API Gateway REST API endpoint 403 "Missing Authentication Token" errors? | "When I try to invoke my Amazon API Gateway REST API, I get 403 "Missing Authentication Token" error messages. How do I troubleshoot these errors?" | "When I try to invoke my Amazon API Gateway REST API, I get 403 "Missing Authentication Token" error messages. How do I troubleshoot these errors?Short descriptionAPI Gateway REST API endpoints return Missing Authentication Token errors for the following reasons:The API request is made to a method or resource that doesn't exist.The API request isn't signed when the API method has AWS Identity and Access Management (IAM) authentication turned on.The API might be configured with a modified Gateway response or the response comes from a backend integration.To troubleshoot other 403 errors for API Gateway, see How do I troubleshoot HTTP 403 errors from API Gateway?ResolutionConfirm that there's a method and resource configured in the API Gateway resource pathFollow the instructions in Set up a method using the API Gateway console. For more information, see Set up API resources.Important: You must deploy the API for the changes to take effect.For APIs with proxy resource integration where the request method is sent to the root resource, verify that there's a method configured under the root resource.Confirm that the API Gateway responses haven't been modified or the backend integration isn't sending the responseMake sure that the gateway responses aren't modified in the API. Also, make sure that the error isn't coming from the integration backend. Check the API Gateway execution logs and backend logs.Confirm that the API request is signed if the API method has IAM authentication turned onFor more information, see Signing AWS API requests and Elements of an AWS API request signature.Confirm that you're sending the correct HTTP method request to the REST API endpointTesting a REST API endpoint from a web browser automatically sends a GET HTTP method request.To test a POST HTTP method request, use a different HTTP client. For example: Postman or curl.Example curl command that uses the POST HTTP method request$ curl -X POST <API URL> -d <request body>Example sending request with JSON header$ curl --location -X POST 'https://1234WXYZ.execute-api.us-east-1.amazonaws.com/stage/lambda_proxy' --header 'Content-Type: application/json' --data-raw '{"x":"y"}'Examples sending curl POST request with AWS V4 signature authentication$ curl -X POST "<ENDPOINT>" -d <data> --user <AWS_ACCESS_KEY>:<AWS_SECRET_KEY> --aws-sigv4 "aws:amz:<REGION>:<SERVICE>"$ curl -X POST "https://1234WXYZ.execute-api.us-east-1.amazonaws.com/stage/lambda_proxy" -d '{"x":"y"}' --user ABCD:1234 --aws-sigv4 "aws:amz:us-east-1:execute-api"Related informationHow do I activate IAM authentication for API Gateway REST APIs?Follow" | https://repost.aws/knowledge-center/api-gateway-authentication-token-errors |
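To confirm from the AWS CLI that the method and resource exist and that the API has been deployed to the stage you call, a hedged sketch with placeholder API ID, resource ID, and stage name:

```sh
# List the REST API's resources and their configured methods
aws apigateway get-resources --rest-api-id a1b2c3d4e5

# Inspect one method on a specific resource
aws apigateway get-method --rest-api-id a1b2c3d4e5 \
  --resource-id abc123 --http-method POST

# Changes take effect only after a deployment to the stage you invoke
aws apigateway create-deployment --rest-api-id a1b2c3d4e5 --stage-name prod
```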
"How do I decouple an Amazon RDS instance from an Elastic Beanstalk environment without downtime, database sync issues, or data loss?" | "I have an Amazon Relational Database Service (Amazon RDS) DB instance attached to my AWS Elastic Beanstalk environment. I want to remove the dependencies between the instance and the environment while avoiding downtime, database sync issues, and data loss." | "I have an Amazon Relational Database Service (Amazon RDS) DB instance attached to my AWS Elastic Beanstalk environment. I want to remove the dependencies between the instance and the environment while avoiding downtime, database sync issues, and data loss.Short descriptionFollow these steps to decouple your database from an Elastic Beanstalk environment without affecting the health of the environment:Create an Amazon RDS DB snapshot.Protect your RDS DB instance from deletion.Create a new Elastic Beanstalk environment.Perform a blue/green deployment.Update the database deletion policy for beanstalk environment A.Decouple the RDS instance from beanstalk environment A.Terminate the old Elastic Beanstalk environment.Important: Attaching an RDS DB instance to an Elastic Beanstalk environment is ideal for development and testing environments. However, it's not recommended for production environments because the lifecycle of the database instance is tied to the lifecycle of your application environment. If you terminate the environment, then you lose your data because the RDS DB instance is deleted by the environment. For more information, see Using Elastic Beanstalk with Amazon RDS.ResolutionCreate an RDS DB snapshotOpen the Elastic Beanstalk console.Choose the Elastic Beanstalk environment that you want to decouple from your RDS DB instance (environment A).In the navigation pane, choose Configuration.For Database, choose Modify.Select Endpoint.Create an RDS DB snapshot of your RDS DB instance.Protect your RDS DB instance from deletionOpen the Amazon RDS console.Choose your database, and then choose Modify.In Deletion protection, select the Enable deletion protection option.Choose Continue.In Scheduling Modifications, choose Apply immediately.Choose Modify DB Instance.Refresh the Amazon RDS console, and then verify that deletion protection is turned on.Create a new Elastic Beanstalk environmentYour new Elastic Beanstalk environment (environment B) must not include an RDS DB instance in the same Elastic Beanstalk application.Note: To perform a blue/green deployment (or CNAME swap) later, make sure that environment A and environment B are part of the same application.Create environment B.Connect environment B to the existing RDS DB instance of environment A.Note: For more information, see Launching and connecting to an external Amazon RDS instance in a default VPC.Verify that environment B connects to the existing RDS DB instance, and that your application works correctly.Perform a blue/green deployment to avoid downtimeOpen the Elastic Beanstalk console for environment B.Swap the environment URLs of the old and new Elastic Beanstalk environments.Note: For more information, see Blue/green deployments with Elastic Beanstalk.Verify that the URL of environment B responds, and that your application works correctly.Important: Don't terminate environment A until the DNS changes are propagated, and your old DNS records expire. DNS records can take up to 48 hours to expire. 
DNS servers don't clear old records from their cache based on the time to live (TTL) that you set on your DNS records.Update the Database deletion policy for beanstalk environment AOpen the Elastic Beanstalk console for environment A.In the navigation pane, choose Configuration.In Database configuration, choose Edit.In Database settings, set the Database deletion policy to Retain.Choose Apply. It will take a few minutes to save the configuration change.Important: Do not proceed to the next step until the Database deletion policy change is applied to beanstalk environment A.Decouple the RDS instance from beanstalk environment AOpen the Elastic Beanstalk console for environment A.In the navigation pane, choose Configuration.In Database configuration, choose Edit.In Database settings, verify that the Database deletion policy is set to Retain.Go to the Database connection section, and choose Decouple database.Choose Apply to initiate the database decoupling operation.Note: The database remains operational during this period, and it usually takes less than five minutes to decouple a database.Terminate the old Elastic Beanstalk environmentAfter the new environment’s functionality is validated, terminate the old Elastic Beanstalk environment (environment A).When you terminate the environment, all Elastic Beanstalk resources are deleted, except for the RDS DB instance and the RDS security group that were created by Elastic Beanstalk.Follow" | https://repost.aws/knowledge-center/decouple-rds-from-beanstalk |
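The snapshot and deletion-protection steps can also be done from the AWS CLI; a sketch with placeholder identifiers:

```sh
# Point-in-time backup before any environment changes
aws rds create-db-snapshot \
  --db-instance-identifier mydbinstance \
  --db-snapshot-identifier mydbinstance-pre-decouple

# Protect the instance from deletion while environments are swapped
aws rds modify-db-instance \
  --db-instance-identifier mydbinstance \
  --deletion-protection \
  --apply-immediately
```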
Why doesn't my MSCK REPAIR TABLE query add partitions to the AWS Glue Data Catalog? | "When I run MSCK REPAIR TABLE, Amazon Athena returns a list of partitions, but then fails to add the partitions to the table in the AWS Glue Data Catalog." | "When I run MSCK REPAIR TABLE, Amazon Athena returns a list of partitions, but then fails to add the partitions to the table in the AWS Glue Data Catalog.Short descriptionHere are some common causes of this behavior:The AWS Identity and Access Management (IAM) user or role doesn't have a policy that allows the glue:BatchCreatePartition action.The Amazon Simple Storage Service (Amazon S3) path is in camel case instead of lower case (for example, userId instead of userid).ResolutionAllow glue:BatchCreatePartition in the IAM policyReview the IAM policies attached to the user or role that you're using to run MSCK REPAIR TABLE. When you use the AWS Glue Data Catalog with Athena, the IAM policy must allow the glue:BatchCreatePartition action. If the policy doesn't allow that action, then Athena can't add partitions to the metastore. For an example of an IAM policy that allows the glue:BatchCreatePartition action, see AmazonAthenaFullAccess managed policy.Change the Amazon S3 path to lower caseThe Amazon S3 path name must be in lower case. If the path is in camel case, then MSCK REPAIR TABLE doesn't add the partitions to the AWS Glue Data Catalog. For example, if the Amazon S3 path is userId, the following partitions aren't added to the AWS Glue Data Catalog:s3://awsdoc-example-bucket/path/userId=1/s3://awsdoc-example-bucket/path/userId=2/s3://awsdoc-example-bucket/path/userId=3/To resolve this issue, use lower case instead of camel case:s3://awsdoc-example-bucket/path/userid=1/s3://awsdoc-example-bucket/path/userid=2/s3://awsdoc-example-bucket/path/userid=3/Related informationPartitioning data in AthenaActions, resources, and condition keys for Amazon AthenaActions, resources, and condition keys for AWS GlueFollow" | https://repost.aws/knowledge-center/athena-aws-glue-msck-repair-table |
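A minimal inline policy that allows the glue:BatchCreatePartition action for the role that runs MSCK REPAIR TABLE; the role name, policy name, and file name are placeholders, and production policies usually scope Resource more tightly than "*":

```sh
# Write a minimal policy document (hypothetical file name)
cat > glue-batchcreatepartition.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "glue:BatchCreatePartition",
      "Resource": "*"
    }
  ]
}
EOF

# Attach it as an inline policy to the querying role (placeholder names)
aws iam put-role-policy \
  --role-name AthenaQueryRole \
  --policy-name AllowGlueBatchCreatePartition \
  --policy-document file://glue-batchcreatepartition.json
```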
How do I troubleshoot 504 errors in CloudFront? | "I'm using an Amazon CloudFront distribution to serve content. However, viewers receive a 504 error when they try to access the content through a web browser. How can I resolve these errors?" | "I'm using an Amazon CloudFront distribution to serve content. However, viewers receive a 504 error when they try to access the content through a web browser. How can I resolve these errors?Short descriptionCloudFront returns two types of 504 errors:504: "Gateway Time-out" errors occur when the error is returned by the origin, and then passed through CloudFront to the viewer.504: "The request could not be satisfied" errors occur when the origin didn't respond to CloudFront in the allotted time frame and so the request expired.Based on the error you receive, see the related resolution section.Resolution504: "Gateway Time-out" errorVerify that the correct ports are open on your security group.Make sure that the origin server allows inbound traffic from CloudFront, typically on port 443 or 80.If your origin uses Elastic Load Balancing, then review the ELB security groups. Make sure that the security groups allow inbound traffic from CloudFront.Verify that the origin server firewall allows connections from CloudFrontDepending on your OS, confirm that the firewall allows traffic for port 443 and 80.If you're using Redhat Linux View, verify that your firewall rules match the following settings.Firewall Rules:$ sudo firewall-cmd --permanent --zone=public --list-portsPermanently Add Rules:$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp $ sudo firewall-cmd --permanent --zone=public --add-port=443/tcpIf you're using Ubuntu Linux, verify that your firewall rules match the following settings.Ubuntu Linux View Firewall Rules:$ sudo ufw status verbosePermanently Add Rules:$ sudo ufw allow 80$ sudo ufw allow 443If you use Windows Firewall on a Windows server, then see Add or Edit Firewall Rule in the Microsoft documentation.Make sure that your custom server is accessible over the internetIf CloudFront is unable to access your origin over the internet, then CloudFront returns a 504 error. To check that internet traffic can connect to your origin, confirm that your HTTP and HTTPS rules match the following settings.For HTTPS Traffic:nc -zv OriginDomainName/IP_Address 443telnet OriginDomainName/IP_Address 443For HTTP Traffic:nc -zv OriginDomainName 80telnet OriginDomainName 80504: "The request could not be satisfied" errorMeasure the typical and high-load latency of your web applicationUse the following command to measure the responsiveness of your web application:curl -w "DNS Lookup Time: %{time_namelookup} \nConnect time: %{time_connect} \nTLS Setup: %{time_appconnect} \nRedirect Time: %{time_redirect} \nTime to first byte: %{time_starttransfer} \nTotal time: %{time_total} \n" -o /dev/null https://www.example.com/yourobjectNote: For https://www.example.com/yourobject, enter the URL of the web application that you're testing.The output looks similar to the following:DNS Lookup Time: 0.212319 Connect time: 0.371254 TLS Setup: 0.544175 Redirect Time: 0.000000 Time to first byte: 0.703863 Total time: 0.703994Depending on the location of the request, troubleshoot the step that shows high latency.Add resources or tune servers and databasesMake sure that your server has enough CPU, memory, and disk space to handle viewer requests.Set up persistent connections on your backend server. 
These connections help latency when connections must be re-established for subsequent requests.Adjust the CloudFront timeout valueIf the previous troubleshooting steps didn't resolve the HTTP 504 errors, then update the time that is specified in your distribution for origin response timeout.Related informationHTTP 504 status code (Gateway Timeout)Follow" | https://repost.aws/knowledge-center/cloudfront-troubleshoot-504-errors |
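For an Amazon EC2 origin, one way to satisfy the "allow inbound traffic from CloudFront" check is to reference the CloudFront origin-facing managed prefix list in the origin's security group. This is a hedged sketch; the security group ID and prefix list ID are placeholders, and the prefix list ID differs per Region:

```sh
# Find the CloudFront origin-facing managed prefix list in your Region
aws ec2 describe-managed-prefix-lists \
  --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing

# Allow HTTPS from CloudFront to the origin's security group (placeholder IDs)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=pl-0123456789abcdef0}]'
```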
How do I use CloudWatch to view the aggregate Amazon EBS performance metrics for an EC2 instance? | I want to check Amazon Elastic Block Store (Amazon EBS) performance metrics for my Amazon Elastic Compute Cloud (Amazon EC2) instance. | "I want to check Amazon Elastic Block Store (Amazon EBS) performance metrics for my Amazon Elastic Compute Cloud (Amazon EC2) instance.Short descriptionAmazon EC2 instances have a limited bandwidth for Amazon EBS volumes. For an Amazon EBS-optimized instance, EBS I/O traffic uses a dedicated bandwidth. To help you understand whether your instance is under-provisioned or over-provisioned, monitor aggregate performance across all attached EBS volumes. For Nitro instances, use Amazon CloudWatch to view Amazon EBS performance metrics, such as I/O operations per second (IOPS) and throughput.Note: To publish custom CloudWatch metrics for Xen-based instances, refer to the AWS Knowledge Center articles for Linux instances and Windows instances.ResolutionThe following resolution is operating system (OS) agnostic and works for all EC2 instances that are based on the Nitro platform. It uses the EBSReadOps, EBSWriteOps, EBSReadBytes, and EBSWriteBytes metrics in the AWS/EC2 namespace to calculate the following metrics and graph them in CloudWatch. This task uses the metric math feature in CloudWatch.Avg Read IOPS = Sum(EBSReadOps) / PERIODAvg Write IOPS = Sum(EBSWriteOps) / PERIODAvg Total IOPS = (Sum(EBSReadOps) + Sum(EBSWriteOps)) / PERIODAvg Read Throughput = Sum(EBSReadBytes) / PERIODAvg Write Throughput = Sum(EBSWriteBytes) / PERIODAvg Total Throughput = (Sum(EBSReadBytes) + Sum(EBSWriteBytes)) / PERIODThis method graphs the following burst metrics for some *.4xlarge instances. It also graphs these metrics for smaller instances that burst to their maximum performance for only 30 minutes at least once every 24 hours:EBSIOBalance%EBSByteBalance%Graph all relevant metrics1. Open the CloudWatch console. Choose your AWS Region from the navigation bar.2. In the navigation pane, choose Metrics, and then choose All metrics.3. Choose Source, and then enter the following CloudWatch source:{ "metrics": [ [ "AWS/EC2", "EBSIOBalance%", "InstanceId", "INSTANCE_ID", { "id": "m1", "visible": false } ], [ ".", "EBSByteBalance%", ".", ".", { "id": "m2", "visible": false } ], [ ".", "EBSReadOps", ".", ".", { "id": "m3", "stat": "Sum", "visible": false } ], [ ".", "EBSWriteOps", ".", ".", { "id": "m4", "stat": "Sum", "visible": false } ], [ ".", "EBSReadBytes", ".", ".", { "id": "m5", "stat": "Sum", "visible": false } ], [ ".", "EBSWriteBytes", ".", ".", { "id": "m6", "stat": "Sum", "visible": false } ], [ { "expression": "m3/PERIOD(m3)", "label": "Avg Read IOPS", "id": "r_io", "visible": false } ], [ { "expression": "m4/PERIOD(m4)", "label": "Avg Write IOPS", "id": "w_io", "visible": false } ], [ { "expression": "(m3+m4)/PERIOD(m3)", "label": "Avg Total IOPS", "id": "t_io" } ], [ { "expression": "(m5/PERIOD(m5))/1024^2", "label": "Avg Read Throughput (MiB/s)", "id": "r_tp", "visible": false } ], [ { "expression": "(m6/PERIOD(m6))/1024^2", "label": "Avg Write Throughput (MiB/s)", "id": "w_tp", "visible": false } ], [ { "expression": "((m5+m6)/PERIOD(m5))/1024^2", "label": "Avg Total Throughput (MiB/s)", "id": "t_tp" } ] ], "view": "timeSeries", "stacked": false, "period": 300, "title": "EC2 aggregate EBS graphs"}Note: Replace INSTANCE_ID with your instance ID.4. Choose Update.5. In the Graphed metrics tab, select the check box next to the metric that you want to view.6. 
(Optional) To set an alarm for any of these metrics, choose the bell icon under the Actions column.If the instance isn't running in the same Region as the Region that's selected in the CloudWatch console, then you see blank graphs.By default, EC2 metrics are available at 5-minute intervals with basic monitoring. For 1-minute resolution, turn on detailed monitoring. EBSIOBalance% and EBSByteBalance% metrics are available only for basic monitoring.Follow" | https://repost.aws/knowledge-center/ebs-aggregate-cloudwatch-performance |
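The same aggregate figures can be pulled without the console. For example, dividing the Sum of EBSReadOps by the period (300 seconds here) gives the average read IOPS; a sketch with a placeholder instance ID and time window:

```sh
# Sum of EBSReadOps in 5-minute buckets; divide each Sum by 300 for average read IOPS
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 --metric-name EBSReadOps \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Sum --period 300 \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T01:00:00Z
```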
Why do I get a client SSL/TLS negotiation error when I try to connect to my load balancer? | I receive a Secure Sockets Layer (SSL)/Transport Layer Security (TLS) negotiation error when I try to connect to my load balancer. Why am I getting this error? | "I receive a Secure Sockets Layer (SSL)/Transport Layer Security (TLS) negotiation error when I try to connect to my load balancer. Why am I getting this error?Short descriptionA client TLS negotiation error means that a TLS connection initiated by the client was unable to establish a session with the load balancer. TLS negotiation errors occur when clients try to connect to a load balancer using a protocol or cipher that the load balancer's security policy doesn't support. To establish a TLS connection, be sure that your client supports the following:One or more matching ciphersA protocol specified in the security policyResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Identify your load balancer's security policyFrom the AWS Management Console:1. Open the Amazon Elastic Compute Cloud (Amazon EC2) console.2. On the navigation pane, under LOAD BALANCING, choose Load Balancers.3. Select the load balancer, and then choose Listeners.4. View the security policy.For Application Load Balancers and Network Load Balancers, find the security policy in the Security policy column.For Classic Load Balancers, choose Change in the Cipher column to view the security policy.From the AWS CLI:For Application Load Balancers and Network Load Balancers, run the describe-listeners commandFor Classic Load Balancers, run the describe-load-balancers commandDetermine that protocols and ciphers that are supported by your load balancer's security policyClassic Load Balancers support custom security policies. However, Application Load Balancers and Network Load Balancers don't support custom security policies. For more information about security policies, including the default security policy, see the following:Application Load Balancer security policiesNetwork Load Balancer security policiesClassic Load Balancer security policies(Optional) Test your load balancer's security policyTo test the protocols and ciphers that are supported by your load balancer’s security policy, use an open source command line tool such as sslscan.Using the sslscan commandYou can install and run the sslscan command on any Amazon EC2 Linux instance or from your local system. Make sure that the load balancer that you want to test accepts TLS connections from your source IP address. To use sslscan on an Amazon Linux EC2 instance:1. Enable the Extra Packages for Enterprise Linux (EPEL) repository.2. Run the sudo yum install sslscan command.3. Run the following command to scan your load balancer for supported ciphers. Be sure to replace example.com with your domain name.[ec2-user@ ~]$ sslscan --show-ciphers example.comUsing the openssl commandOr, you can also test your load balancer's security policy by using the openssl command. You can run the openssl command on any Amazon EC2 Linux instance or from your local system.To list the supported ciphers for a particular SSL/TLS version, use the openssl ciphers command:*$* openssl ciphers -vFor example, and following command would show ciphers supported by TLS version TLSv1.2:*$* openssl ciphers -V | grep "TLSv1.2"Use the s_client command to test TLS versions and cipher suites. 
To find the strength of particular cipher suites, you can use a website repository such as ciphersuites.info. For example, the following command would show ciphers for www.example.com:openssl s_client -connect example.com:443For example, the suite TLS_PSK_WITH_AES_128_CBC_SHA is considered weak. If you use the suite against a server, you receive the following error:openssl s_client -connect example.com:443 -cipher PSK-AES128-CBC-SHA -quiet 140062732593056:error:140740B5:SSL routines:SSL23_CLIENT_HELLO:no ciphers available:s23_clnt.c:508:The suite ECDHE-RSA-AES128-GCM-SHA256 is considered strong. If you use the suite against the server, you receive a success message similar to the following:openssl s_client -connect example.com:443 -cipher ECDHE-RSA-AES128-GCM-SHA256 New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256Server public key is 2048 bitSecure Renegotiation IS supportedCompression: NONEExpansion: NONENo ALPN negotiatedSSL-Session:Protocol : TLSv1.2Cipher : ECDHE-RSA-AES128-GCM-SHA256Session-ID: 73B49649716645B90D13E29656AEFEBF289A4956301AD9BC65D4832794E282CDSession-ID-ctx:Master-Key: C738D1E7160421281C4CAFEA49941895430168A4028B5D5F6CB6739B58A15235F640A5D740D368A4436CCAFD062B3338Key-Arg : NoneKrb5 Principal: NonePSK identity: NonePSK identity hint: NoneStart Time: 1647375807Timeout : 300 (sec)Verify return code: 0 (ok)You can also use the openssl command to specify the version of the TLS protocol used in the connection. The following example shows a test that verifies that TLS 1.1 is supported by the server:openssl s_client -connect example.com:443 -tls1_1 -quiet depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root G2verify return:1depth=1 C = US, O = DigiCert Inc, CN = DigiCert Global CA G2verify return:1depth=0 CN = *.peg.a2z.comverify return:1Update your load balancer's security policy, if necessaryTo update your load balancer's security policy to use supported protocols or ciphers and achieve the desired level of security, perform the following actions:Update an Application Load Balancer security policyUpdate a Network Load Balancer security policyUpdate a Classic Load Balancer security policyFollow" | https://repost.aws/knowledge-center/elb-fix-ssl-tls-negotiation-error |
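If the active policy turns out to be outdated for your clients, an Application or Network Load Balancer listener policy can be changed from the AWS CLI; the listener ARN and policy name below are placeholders, so substitute a predefined policy that matches your requirements:

```sh
# Switch the HTTPS/TLS listener to a different predefined security policy
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/1234567890abcdef/abcdef1234567890 \
  --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06
```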
How do I troubleshoot a failed step in Amazon EMR? | I want to troubleshoot a failed step in my Amazon EMR cluster. | "I want to troubleshoot a failed step in my Amazon EMR cluster.Short descriptionAmazon EMR identifies and returns the root cause of step failures for steps submitted using the Step API operation. Amazon EMR 5.x and later also returns the name of the relevant log file and a portion of the application stack trace through API.Note: You can use the following information to troubleshoot an Amazon EMR step of any application. For information specific to failed Apache Spark steps, see How do I troubleshoot a failed Spark step in Amazon EMR?ResolutionNote: For descriptions of the types of step logs, see Check the step logs.View step logs using the AWS Management consoleFor more information, see To view failure details using the AWS console in Enhanced step debugging.View step details using the AWS CLINote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent version of the AWS CLI.Use the describe-step command as shown in the following example. In the following command, replace cluster-id and the step-id with the correct values for your use case.aws emr describe-step --cluster-id j-1K48XXXXXHCB --step-id s-3QM0XXXXXM1WFor more information, see To view failure details using the AWS CLI in Enhanced step debugging.View step log files on the master node using SSHFor more information, see View log files on the master node.View log files archived in Amazon S31. Open the Amazon Simple Storage Service (Amazon S3) console.2. Select the S3 bucket specified as the S3 log URI where log files are archived.3. Navigate to the following path and download the log file object: cluster-id/steps/step-id/.For more information, see View log files archived to Amazon S3.View step logs in the debugging toolNote: The debugging tool isn't turned on automatically in Amazon EMR. For information on turning on the debugging tool, see Turn on the debugging tool.For information on viewing step logs in the debugging tool, see View log files in the debugging tool.Related informationHow do I cancel an Amazon EMR step?Follow" | https://repost.aws/knowledge-center/emr-troubleshoot-failed-step |
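To locate failed steps and read their archived stderr from Amazon S3 with the AWS CLI, a hedged sketch; the cluster ID, step ID, and log bucket are placeholders, and the stderr.gz file name assumes the default step log layout:

```sh
# List only the failed steps on the cluster
aws emr list-steps --cluster-id j-1K48XXXXXHCB --step-states FAILED

# Stream and decompress the archived stderr for one step (log URI is a placeholder)
aws s3 cp s3://example-bucket/logs/j-1K48XXXXXHCB/steps/s-3QM0XXXXXM1W/stderr.gz - | gunzip
```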
Why isn't my Lambda function triggered even though it's configured with SQS as the event source? | I've configured my AWS Lambda function to process messages in an Amazon Simple Queue Service (Amazon SQS) queue. But my Lambda function isn't invoked and doesn't process messages in the queue. | "I've configured my AWS Lambda function to process messages in an Amazon Simple Queue Service (Amazon SQS) queue. But my Lambda function isn't invoked and doesn't process messages in the queue.ResolutionFirst confirm that the Lambda function is configured with Amazon SQS as the event source. Then, confirm that the Lambda function's execution role has the permissions it needs to fetch messages from the SQS queue.Check the Amazon CloudWatch metrics for your function for invocations. If you still don't see any invocations on the Lambda function, then use the following troubleshooting steps to find the root cause.Check that the Lambda function and SQS queue URLs are correctConfirm that the Lambda function URL and the SQS queue URL in the event source mapping on the Lambda function are correct. Also, make sure that event source mapping is turned on.1. Navigate to the function page ( $Latest/specific version/Alias), and then choose the SQS trigger in the UI. This lists the triggers.2. Expand the SQS trigger, and check if the SQS queue URL matches the expected queue. Also, confirm if the trigger status is Enabled.3. If the trigger is not turned on, then choose Edit, and then set the trigger back to the Enabled state.-or-Perform these checks by calling the list-event-source-mapping command in Lambda. Be sure to add the --function-name to the command.See the following example:aws lambda list-event-source-mappings --function-name <my-function> --region <region-name>Note: Replace <my-function> and <region name> with the names of your Lambda function and AWS Region.Check the Lambda function permissionsIf the Lambda execution role has permission to poll messages from the SQS queue, then check the SQS queue's access policy. Look for deny rules that might be blocking the Lambda function from accessing the queue and fetching messages.1. Log in to the Amazon SQS console, and then choose Access Policy.2. Check if there are any deny policies blocking the traffic from Lambda. Then, add a condition in the deny statement to ignore requests coming from Lambda when denying requests to the SQS queue.Check if the SQS queue is encryptedConfirm if the SQS queue is encrypted with an AWS Key Management Service (AWS KMS) key. If the queue is encrypted, then the Lambda function's execution role needs the permissions to perform AWS KMS actions. Without these permissions, the Lambda function can't consume messages from the SQS queue.Check if the specific Lambda function is observing throttlesLambda has a Regional concurrency limit. If other functions in the Region actively use this capacity to its maximum, then throttles can occur on the function. This can happen even if the function itself is not reaching the maximum capacity.If you set a reserved concurrency on the function, then no invocations occur on the function and all messages from Amazon SQS are throttled.Check the Regional ConcurrentExecutions (maximum) metric, and the function's Throttle (SUM) metrics in Amazon CloudWatch. Verify whether the Regional capacity is reached and whether there are any throttles on the function. 
Make sure that there's enough capacity for the function to be invoked and process SQS messages.Confirm that there are no other active consumers on the same SQS queueIf there is more than one active consumer on the SQS queue, then your messages might be consumed by these consumers. SQS messages are designed to be consumed by only one consumer at a time. So, if another consumer is consuming the SQS queue, then your Lambda function might not receive any messages when it polls the SQS queue.Verify whether any other Lambda triggers or Amazon Simple Notification Service (Amazon SNS) triggers are active in your SQS queue using the Amazon SQS console.Note: Other consumers might be pulling messages from the SQS queue programmatically. These pulls don't appear in the console.Check if the Lambda function and SQS queue are in different Regions of the same accountLambda allows you to configure only same-Region SQS queues as the source in event source mapping.Check if the Lambda function and SQS queue are in different accountsWith Lambda, your Lambda function and SQS queue can exist in different AWS accounts as long as they're in the same AWS Region. But, you might need additional permissions and configurations to work with this cross-account setup.Check if the SQS event source is configured with filtersCheck if the SQS event source is configured with any filters. If your SQS event source is configured with filters, then make sure that the filters aren't filtering out some or all Amazon SQS messages.1. Log in to the Lambda function console, and then choose the SQS trigger.2. Expand the SQS trigger configuration, and then verify the Filter criteria. If the trigger configuration doesn't show the key name, then this indicates that no filter is configured.3. If filter criteria are configured, then review the filter to confirm that it's allowing valid messages to be processed by Lambda.4. To temporarily remove the filter criteria, choose Edit beside the SQS trigger details.5. If the function is invoked after you remove the filter, then modify the filter criteria to match your use case.For more information, see Properly filtering Amazon SQS messages and Best practices for implementing Lambda event filtering.Further troubleshootingIf your function is invoked but isn't scaling as expected and is causing a backlog in your queue, then see Why isn't my Lambda function with an Amazon SQS event source scaling optimally?If your function is invoked but certain messages go to the dead-letter queue (DLQ) without being processed, then see Why is my Lambda function retrying valid Amazon SQS messages and placing them in my dead-letter queue?Related informationUsing Lambda with Amazon SQSFollow" | https://repost.aws/knowledge-center/lambda-sqs-event-source |
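The execution-role permissions the function needs to poll the queue can be granted with an inline policy like this sketch; the role name, queue ARN, and file name are placeholders, and encrypted queues additionally require kms:Decrypt on the queue's KMS key:

```sh
# Minimal polling permissions for the Lambda execution role (placeholder ARN)
cat > lambda-sqs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue"
    }
  ]
}
EOF

aws iam put-role-policy \
  --role-name my-lambda-execution-role \
  --policy-name AllowSqsPolling \
  --policy-document file://lambda-sqs-policy.json
```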
How do I determine the active SSL security policy associated with my ELB listener using the AWS CLI? | I want to use the AWS Command Line Interface (AWS CLI) to determine the active security policy on my HTTPS or SSL listener on Elastic Load Balancing (ELB). How can I do this? | "I want to use the AWS Command Line Interface (AWS CLI) to determine the active security policy on my HTTPS or SSL listener on Elastic Load Balancing (ELB). How can I do this?Short descriptionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.You can add an HTTPS listener to your load balancer for Amazon Elastic Compute Cloud (Amazon EC2). The SSL security policy for the listener is displayed in the Amazon EC2 console. For instructions, see update SSL negotiation configuration for an HTTPS/SSL load balancer.You can display the load balancer listener SSL security policy names and predefined SSL security policies with the AWS CLI command describe-load-balancer-policies similar to the following:aws elb describe-load-balancer-policies --load-balancer-name EXAMPLEELB --query "PolicyDescriptions[?PolicyTypeName==`SSLNegotiationPolicyType`].{PolicyName:PolicyName,ReferenceSecurityPolicy:PolicyAttributeDescriptions[0].AttributeValue}" --output tableExample output:-------------------------------------------------------------------------------------| DescribeLoadBalancerPolicies |+-------------------------------------------------------+---------------------------+| PolicyName | ReferenceSecurityPolicy |+-------------------------------------------------------+---------------------------+| ELBSecurityPolicy-2015-05 | ELBSecurityPolicy-2015-05 || AWSConsole-SSLNegotiationPolicy-TESTELB-1446825761570 | ELBSecurityPolicy-2015-03 || AWSConsole-SSLNegotiationPolicy-TESTELB-1446774067525 | ELBSecurityPolicy-2015-05 || AWSConsole-SSLNegotiationPolicy-TESTELB-1446774575245 | ELBSecurityPolicy-2015-05 || AWSConsole-SSLNegotiationPolicy-TESTELB-1447023243695 | ELBSecurityPolicy-2014-10 || AWSConsole-SSLNegotiationPolicy-TESTELB-1447039877149 | false || AWSConsole-SSLNegotiationPolicy-TESTELB-1447039147749 | ELBSecurityPolicy-2011-08 || AWSConsole-SSLNegotiationPolicy-TESTELB-1447102065672 | ELBSecurityPolicy-2014-10 |+-------------------------------------------------------+---------------------------+Note: A ReferenceSecurityPolicy value of false indicates that the policy was not created using one of the predefined security policies.The output returns the SSL security policies associated with a load balancer listener, but doesn't indicate which policy is active.ResolutionTo determine the active load balancer listener SSL security policy and any associated predefined SSL security policies:1. 
Run the AWS command describe-load-balancers similar to the following:aws elb describe-load-balancers --load-balancer-name EXAMPLEELB --query "LoadBalancerDescriptions[*].{ActivePolicy:ListenerDescriptions}" --output tableExample output:---------------------------------------------------------------------------------| DescribeLoadBalancers ||| ActivePolicy ||||| Listener |||||+-------------------+-------------------------------------------------------+||||| InstancePort | 443 |||||| InstanceProtocol | SSL |||||| LoadBalancerPort | 443 |||||| Protocol | SSL |||||| SSLCertificateId | arn:aws:iam::803981987763:server-certificate/ELBSSL |||||+-------------------+-------------------------------------------------------+||||| PolicyNames |||||+---------------------------------------------------------------------------+||||| AWSConsole-SSLNegotiationPolicy-TESTELB-1447102065672 |||||+---------------------------------------------------------------------------+|||| ActivePolicy ||||| Listener |||||+-----------------------------------------------------+---------------------+||||| InstancePort | 80 |||||| InstanceProtocol | HTTP |||||| LoadBalancerPort | 80 |||||| Protocol | HTTP |||||+-----------------------------------------------------+---------------------+||The output returns the policy names associated with the load balancer.2. To get more information about a policy, run the AWS CLI command describe-load-balancer-policies similar to the following:aws elb describe-load-balancer-policies --load-balancer-name EXAMPLEELB --policy-name AWSConsole-SSLNegotiationPolicy-TESTELB-1447102065672 --query "PolicyDescriptions[0].{ReferenceSecurityPolicy:PolicyAttributeDescriptions[0].AttributeValue}" --output tableExample output:----------------------------------------------------------| DescribeLoadBalancerPolicies |+--------------------------+-----------------------------+| ReferenceSecurityPolicy | ELBSecurityPolicy-2014-10 |+--------------------------+-----------------------------+Related informationUpdate the SSL negotiation configuration using the consoleFollow" | https://repost.aws/knowledge-center/elb-listener-policy-cli |
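The commands above apply to Classic Load Balancers (the elb API). For Application and Network Load Balancers, which use the elbv2 API, the active policy is shown directly on the listener; a hedged equivalent with a placeholder load balancer ARN:

```sh
# SslPolicy on each listener is the active security policy (placeholder ARN)
aws elbv2 describe-listeners \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef \
  --query 'Listeners[*].[Port,Protocol,SslPolicy]' --output table
```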
How do I make my Amazon OpenSearch Service domain more fault tolerant? | "I want to protect Amazon OpenSearch Service resources against accidental deletion, application or hardware failures, or outages. What are some best practices for improving fault tolerance or restoring snapshots?" | "I want to protect Amazon OpenSearch Service resources against accidental deletion, application or hardware failures, or outages. What are some best practices for improving fault tolerance or restoring snapshots?Short descriptionTo improve the fault tolerance of an OpenSearch Service domain, consider the following best practices:Take regular index snapshots.Use Amazon CloudWatch metrics to monitor OpenSearch Service resources.Understand OpenSearch Service limits.Use dedicated master nodes.Use at least three nodes.Enable zone awareness.Don't use T2 instances in production environments.ResolutionTake regular index snapshotsAll OpenSearch Service domains take automated snapshots. Take manual index snapshots to create point-in-time backups of the data in an OpenSearch Service domain. Store the snapshots in an Amazon Simple Storage Service (Amazon S3) bucket. You can also use manual index snapshots to migrate data between OpenSearch Service domains or to restore data to another OpenSearch Service domain.Monitor Amazon CloudWatch metricsUse the Cluster health and Instance health tabs in the OpenSearch Service console to monitor the Amazon CloudWatch metrics for your clusters.Create Amazon CloudWatch alarms for important OpenSearch Service metrics. For example, monitor the AutomatedSnapshotFailure metric to confirm that automated snapshots are happening at regular intervals. For a tutorial, see Get started with OpenSearch Service: Set CloudWatch alarms on key metrics.Use dedicated master nodesDedicated master nodes help prevent problems that are caused by overloaded nodes. Use dedicated master nodes when:Your domain is used in production environments.Your domain has five or more nodes.Your index mapping is complex, with many fields defined across types and indices.Use at least three nodesTo avoid an unintentionally partitioned network (split brain), use at least three nodes. To avoid potential data loss, be sure that you have at least one replica for each index. (By default, each index has one replica.)Enable zone awarenessZone awareness helps prevent downtime and data loss. When zone awareness is enabled, OpenSearch Service allocates the nodes and replica index shards across two or three Availability Zones in the same AWS Region.Note: For a setup of three Availability Zones, use two replicas of your index. If there is a single zone failure, the two replicas afford 100% data redundancy.Don't use T2 instances in production environmentsFor production environments, use M-class or larger Amazon Elastic Compute Cloud (Amazon EC2) instances. If you use T2 instance types, be sure to monitor the CPU credits, CPU usage, memory usage, and stability of your instances. Scale up or out when necessary.Additionally, note the following limitations for T2 instances:T2 instances are assigned CPU credits. If there is a spike in network traffic, your OpenSearch Service cluster could exceed the amount of CPU credits available in your T2 instance. For more information, see CPU credits and baseline utilization for burstable performance instances.T2 instances have an EBS volume limit of 35 GB.T2 instances have a payload limit of 10 MB. Make sure that your request payload doesn't exceed the payload limit. 
For more information about OpenSearch Service network limits, see Network limits.T2 instance types can be used only if your OpenSearch Service instance count is ten or fewer. For more information about the supported OpenSearch Service instance types, see Supported instance types.T2 instance types must not be used as data nodes or dedicated master nodes. T2 instance types can become unstable under sustained heavy load. For more information, see OpenSearch Service best practices.Related informationGet started with Amazon OpenSearch Service: How many data instances do I need?Creating and managing Amazon OpenSearch Service domainsFollow" | https://repost.aws/knowledge-center/opensearch-fault-tolerance |
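As one example of alarming on a key metric, the following sketch creates a CloudWatch alarm on AutomatedSnapshotFailure; the domain name, account ID, and SNS topic ARN are placeholders:

```sh
# Alarm when an automated snapshot fails (ClientId is the account ID placeholder)
aws cloudwatch put-metric-alarm \
  --alarm-name opensearch-snapshot-failure \
  --namespace AWS/ES --metric-name AutomatedSnapshotFailure \
  --dimensions Name=DomainName,Value=my-domain Name=ClientId,Value=123456789012 \
  --statistic Maximum --period 60 --evaluation-periods 1 \
  --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic
```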
How can I change the flow of my Amazon Lex bot using an initialization/validation or fulfillment AWS Lambda function? | I want to change the flow of my Amazon Lex bot using an initialization/validation or fulfillment AWS Lambda function. How can I do this using the Amazon Lex console? | "I want to change the flow of my Amazon Lex bot using an initialization/validation or fulfillment AWS Lambda function. How can I do this using the Amazon Lex console?Short descriptionYou can use initialization/validation or fulfillment AWS Lambda code hooks to make changes to the dialog flow of your Amazon Lex bot. Amazon Lex console provides the dialogAction field, which directs Amazon Lex in the next course of action to take in its interaction with your users. This field describes to Amazon Lex what to expect from a user after it returns a response to the client.The type field indicates the next course of action for the bot. It also determines the other fields that the Lambda function must provide as part of the dialogAction value. There are five types:ElicitIntent: Informs Amazon Lex that the user is expected to respond with an utterance that includes an intent.ElicitSlot: Informs Amazon Lex that the user is expected to provide a slot value in the response.Close: Informs Amazon Lex not to expect a response from the user. For example, "Your pizza order has been placed" does not require a response.ConfirmIntent: Informs Amazon Lex that the user is expected to give a yes or no answer to confirm or deny the current intent.Delegate: Directs Amazon Lex to choose the next course of action based on the bot configurationResolutionNote: The steps in this article use V2 of the Amazon Lex console. If you are currently using V1, choose Switch to the new Lex v2 console in the navigation pane.Lambda response syntax specifies the format in which Amazon Lex expects a response from your Lambda function.Review the following information about the response fields:sessionState – This field is required. It describes the current state of the conversation with the user. The actual contents of the structure depends on the type of dialog action.dialogAction – This field determines the type of action that Amazon Lex should take in response to the Lambda function. The type field is always required. The slotToElicit field is required only when dialogAction.type is ElicitSlot.intent – The name of the intent that Amazon Lex uses. This field is not required when dialogAction.type is Delegate or ElicitIntent.state – This field is required. The state can only be ReadyForFulfillment if dialogAction.type is Delegate.messages – This field is required if dialogAction.type is ElicitIntent. It describes one or more messages that Amazon Lex shows to the customer to perform the next turn of the conversation. If you don't supply messages, then Amazon Lex uses an appropriate message that you defined when the bot was created. For more information, see the Message data type.contentType – This describes the type of message to use.content – If the message type is PlainText, CustomPayload, or SSML, then this field contains the message to send to the user.imageResponseCard – If the message type is ImageResponseCard, then this field contains the definition of the response card to show to the user. 
For more information, see the ImageResponseCard data type.Change dialog flow using the ElicitSlot typeTo change dialog flow using ElicitSlot type, pass the response from the Lambda code hook in this format:{ "sessionState": { "dialogAction": { "slotToElicit": "<slot-name-to-elicit>", "type": "ElicitSlot" }, "intent": { "name": "<intent-name-to-elicit>", "state": "<state>" } }}After returning the response, a slot called slot-name-to-elicit belonging to the intent-name-to-elicit intent is elicited by Amazon Lex.Change dialog flow using the ElicitIntent typeTo change dialog flow using ElicitIntent type, pass the response from the Lambda code hook in this format: { "sessionState": { "dialogAction": { "type": "ElicitIntent" } }, "messages": [{ "contentType": "<content-type>", "content": "<message>" }] }After returning this response, the message specified in the message placeholder is displayed to the user. The next user input is taken as an intent utterance, which invokes the intent with the highest nluConfidence score.Related informationResponse formatamazon-lex-v2-dialogationFollow" | https://repost.aws/knowledge-center/lex-dialogflow-fulfillment-lambda |
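To check which dialogAction your code hook actually returned, you can drive a test turn from the AWS CLI and inspect sessionState.dialogAction in the response; the bot ID, alias ID, session ID, and utterance below are placeholders:

```sh
# Send one user utterance to the bot and print the resulting dialog action
aws lexv2-runtime recognize-text \
  --bot-id ABCDEFGHIJ \
  --bot-alias-id TSTALIASID \
  --locale-id en_US \
  --session-id test-session-1 \
  --text "I want to order a pizza" \
  --query 'sessionState.dialogAction'
```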
How do I benchmark network throughput between Amazon EC2 Linux instances in the same Amazon VPC? | I want to measure the network bandwidth between Amazon Elastic Compute Cloud (Amazon EC2) Linux instances in the same Amazon Virtual Private Cloud (Amazon VPC). How can I do that? | "I want to measure the network bandwidth between Amazon Elastic Compute Cloud (Amazon EC2) Linux instances in the same Amazon Virtual Private Cloud (Amazon VPC). How can I do that?Short descriptionHere are some factors that can affect Amazon EC2 network performance when the instances are in the same Amazon VPC:The physical proximity of the EC2 instances. Instances in the same Availability Zone are geographically closest to each other. Instances in different Availability Zones in the same Region, instances in different Regions on the same continent, and instances in different Regions on different continents are progressively farther away from one another.The EC2 instance maximum transmission unit (MTU). The MTU of a network connection is the largest permissible packet size (in bytes) that your connection can pass. All EC2 instance types support 1500 MTU. All current generation Amazon EC2 instances support jumbo frames. In addition, the previous generation instances, C3, G2, I2, M3, and R3 also use jumbo frames. Jumbo frames allow more than 1500 MTU. However, there are scenarios where your instance is limited to 1500 MTU even with jumbo frames. For more information, see Jumbo frames (9001 MTU).The size of your EC2 instance. Larger instance sizes for an instance type typically provide better network performance than smaller instance sizes of the same type. For more information, see Amazon EC2 instance types.Amazon EC2 enhanced networking support for Linux, except for T2 and M3 instance types. For more information, see Enhanced networking on Linux. For information on enabling enhanced networking on your instance, see How do I enable and configure enhanced networking on my EC2 instances?Amazon EC2 high performance computing (HPC) support that uses placement groups. HPC provides full-bisection bandwidth and low latency, with support for up to 100-gigabit network speeds, depending on the instance type. To review network performance for each instance type, see Amazon Linux AMI instance type matrix. For more information, see Launch instances in a placement group.The instance uses a network I/O credit mechanism to allocate network bandwidth. Instances designated with a **†** symbol in the Network performance column in General purpose instances - Network performance can reach the designated maximum network performance. However, these instances use a network I/O credit mechanism to allocate bandwidth to instances based on average bandwidth utilization. So, network performance varies for these instances.Because of these factors, you might experience significant network performance differences between different cloud environments. It's a best practice to regularly evaluate and baseline the network performance of your environment to improve application performance. Testing network performance provides valuable insight for determining the EC2 instance types, sizes, and configurations that best suit your needs. 
You can run network performance tests on any combination of instances you choose.For more information, open an AWS Support case and ask for additional network performance specifications for the specific instance types that you're interested in.ResolutionBefore beginning benchmark tests, launch and configure your EC2 Linux instances:1. Launch two Linux instances that you can run network performance testing from.2. Verify that the instances support enhanced networking for Linux, and that they are in the same Amazon VPC.3. (Optional) If you're performing network testing between instances that don't support jumbo frames, then follow the steps in Network maximum transmission unit (MTU) for your EC2 instance to check and set the MTU on your instance.4. Connect to the instances to verify that you can access the instances.Install the iperf network benchmark tool on both instancesIn some distros, such as Amazon Linux, iperf is part of the Extra Packages for Enterprise Linux (EPEL) repository. To enable the EPEL repository, see How do I enable the EPEL repository for my Amazon EC2 instance running CentOS, RHEL, or Amazon Linux?Note: The command iperf refers to version 2.x. The command iperf3 refers to version 3.x. Use version 2.x when benchmarking EC2 instances with high throughput because version 2.x provides multi-thread support. Although version 3.x also supports parallel streams using the -P flag, version 3.x is single-threaded and limited by a single CPU. Due to this, version 3.x requires multiple processes running in parallel to drive the necessary throughput on bigger EC2 instances. For more information, see iperf2/iperf3 on the ESnet website.Connect to your Linux instances, and then run the following commands to install iperf.To install iperf on RHEL 6 Linux hosts, run the following command:# yum -y install https://dl.fedoraproject.org/pub/archive/epel/6/x86_64/epel-release-6-8.noarch.rpm && yum -y install iperfTo install iperf on RHEL 7 Linux hosts, run the following command:# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install iperfTo install iperf on Debian/Ubuntu hosts, run the following command:# apt-get install -y iperfTo install iperf on CentOS 6/7 hosts, run the following command:# yum -y install epel-release && yum -y install iperfTest TCP network performance between the instancesBy default, iperf communicates over port 5001 when testing TCP performance. However, you can configure that port by using the -p switch. Be sure to configure your security groups to allow communication over the port that iperf uses.1. Configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch. Replace 5001 with your port, if different:$ sudo iperf -s [-p 5001]2. Configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a TCP test against the specified server instance with 40 parallel connections:$ iperf -c 172.31.30.41 --parallel 40 -i 1 -t 2Note: For a bidirectional test with iperf (version 2), use the -r option on the client side.Using these specified iperf parameters, the output shows the interval per client stream, the data transferred per client stream, and the bandwidth used by each client stream. The following iperf output shows test results for two c5n.18xlarge EC2 Linux instances launched in a cluster placement group. 
The total bandwidth transmitted across all connections is 97.6 Gbits/second:------------------------------------------------------------------------------------Client connecting to 172.31.30.41, TCP port 5001TCP window size: 975 KByte (default)------------------------------------------------------------------------------------[ 8] local 172.31.20.27 port 49498 connected with 172.31.30.41 port 5001[ 38] local 172.31.20.27 port 49560 connected with 172.31.30.41 port 5001[ 33] local 172.31.20.27 port 49548 connected with 172.31.30.41 port 5001[ 40] local 172.31.20.27 port 49558 connected with 172.31.30.41 port 5001[ 36] local 172.31.20.27 port 49554 connected with 172.31.30.41 port 5001[ 39] local 172.31.20.27 port 49562 connected with 172.31.30.41 port 5001...[SUM] 0.0- 2.0 sec 22.8 GBytes 97.6 Gbits/secTest UDP network performance between the instancesBy default, iperf communicates over port 5001 when testing UDP performance. However, the port that you use is configurable using the -p switch. Be sure to configure your security groups to allow communication over the port that iperf uses.Note: The default for UDP is 1 Mbit per second unless you specify a different bandwidth.1. Configure one instance as a server to listen on the default UDP port, or specify an alternate listener port with the -p switch. Replace 5001 with your port, if different:$ sudo iperf -s -u [-p 5001]2. Configure a second instance as a client, and then run a test against the server with the desired parameters. The following example initiates a UDP test against the specified server instance with the -b parameter set to 5g.The -b parameter changes the bandwidth to 5g from the UDP default of 1 Mbit per second. 5g is the maximum network performance a c5n.18xlarge instance can provide for a single traffic flow within a VPC. For more information, see New C5n instances with 100 Gbps networking.Note: UDP is connectionless and doesn't have the congestion control algorithms that TCP has. When testing with iperf, the bandwidth obtained with UDP might be lower than the bandwidth obtained with TCP.# iperf -c 172.31.1.152 -u -b 5gThe output shows the interval (time), the amount of data transferred, the bandwidth achieved, the jitter (the deviation in time for the periodic arrival of datagrams), and the loss/total of UDP datagrams:$ iperf -c 172.31.30.41 -u -b 5g------------------------------------------------------------------------------------Client connecting to 172.31.30.41, UDP port 5001Sending 1470 byte datagrams, IPG target: 2.35 us (kalman adjust)UDP buffer size: 208 KByte (default)------------------------------------------------------------------------------------[ 3] local 172.31.20.27 port 39022 connected with 172.31.30.41 port 5001[ ID] Interval Transfer Bandwidth[ 3] 0.0-10.0 sec 5.82 GBytes 5.00 Gbits/sec[ 3] Sent 4251700 datagrams[ 3] Server Report:[ 3] 0.0-10.0 sec 5.82 GBytes 5.00 Gbits/sec 0.003 ms 1911/4251700 (0.045%)[ 3] 0.00-10.00 sec 1 datagrams received out-of-orderRelated informationESnet - disk testing using iperf3ESnet - network tuningThroughput tool comparisoniperf2 at SourceforgeFollow" | https://repost.aws/knowledge-center/network-throughput-benchmark-linux-ec2 |
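If you run these client-side tests often, you can wrap them in a small script. The following Python sketch is an assumption, not part of the article: it shells out to iperf (version 2) with parallel TCP streams and prints the [SUM] line that reports the aggregate bandwidth. It assumes iperf is already installed and a server is listening on the default port.

import subprocess

def run_iperf_tcp(server_ip, parallel=40, seconds=10):
    # Run an iperf (version 2) TCP client test with parallel streams.
    cmd = ["iperf", "-c", server_ip, "--parallel", str(parallel), "-t", str(seconds)]
    output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # The [SUM] line reports the aggregate transfer and bandwidth across all streams.
    for line in output.splitlines():
        if line.startswith("[SUM]"):
            print(line)
    return output

# Example: run_iperf_tcp("172.31.30.41")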
How do I resolve the "Internal Failure" error when I try to create or update a stack in CloudFormation? | I want to resolve the "Internal Failure" error in AWS CloudFormation. | "I want to resolve the "Internal Failure" error in AWS CloudFormation.Short descriptionIf you're creating or updating your CloudFormation stack, you can receive an "Internal Failure" error when an operation on a resource fails. You can also receive this error if your stack fails to deploy.An operation on a resource can fail in the following scenarios:Your resources or properties are set to incorrect values. To resolve this issue, complete the steps in the Deploy a test stack to find the incorrect values for your resources or properties section.An internal workflow failed. To resolve this issue using AWS CloudTrail, complete the steps in the Find the failed API operations in your CloudTrail event logs section.Finally, your stack can fail to deploy if you pass incorrect values to the Outputs section of your CloudFormation template. To resolve this error, complete the steps in the Check the values in the Outputs section of your CloudFormation template section.Note: The following steps apply to only "Internal Failure" errors that you receive when you try to create or update a stack in CloudFormation.ResolutionDeploy a test stack to find the incorrect values for your resources or propertiesTo find the incorrect values for your resource properties or attributes, deploy a test stack with a CloudFormation template that includes only the failed resource.If your test stack deploys successfully, follow the steps in the Find the failed API operations in your CloudTrail event logs section.If your test stack deployment fails, continue to remove non-required properties and attributes from the test stack until you find the incorrect values.In the following example scenario, you receive an "Internal Failure" error when CloudFormation tries to create an AWS::Config::ConformancePack resource with AWS Config. You receive an error because the DeliveryS3Bucket property uses incorrect syntax. The DeliveryS3Bucket property accepts only a bucket name as a value (for example: bucketname). A file path that includes the bucket name isn't an acceptable value (for example: s3://bucketname).AWSTemplateFormatVersion: 2010-09-09Resources: CloudFormationCanaryPack: Type: AWS::Config::ConformancePack Properties: ConformancePackName: ConformancePackName DeliveryS3Bucket: s3://bucketname # Incorrect value for DeliveryS3Bucket TemplateS3Uri: s3://bucketname/prefixFind the failed API operations in your CloudTrail event logs1. Open the CloudTrail console.2. In the navigation pane, choose Event history.3. For Time range, enter a time range to isolate the failed API call, and then choose Apply.Tip: For the From time, enter the time when the resource entered the CREATE_IN_PROGRESS or UPDATE_IN_PROGRESS status in your CloudFormation stack. For the To time, enter the time when the API call failed.4. To search beyond the default display of events in Event history, use attribute filters.Note: By default, Event history uses a Read-only filter that's set to false. The Read-only filter result shows only write events for API activity and excludes read-only events from the list of displayed events.You can use EventName to filter by the name of the returned event. If you know the API action used to create or update a resource, then you can use an EventName filter for specific API calls only. 
For example, the CloudFormation stack uses the AWS Config API action PutConformancePack when it creates an AWS::Config::ConformancePack resource. That means you can filter for the PutConformancePack API only. You can use EventSource to filter by the AWS service that made the API request. That means you can scroll through a list of event sources and choose the appropriate resource used in your CloudFormation template.5. To identify the root cause of the failure, review the error message for the returned event.Note: Some API operation failures require you to update your original CloudFormation template, and then perform a test deployment to confirm that the error is resolved.Check the values in the Outputs section of your CloudFormation templateIn your CloudFormation template, confirm that the values in the Outputs section don't contain syntax errors. For example, remove any trailing spaces.If you retrieve resource attributes with dynamic references, you must confirm that the attributes are available during stack deployment. To simulate this outside of CloudFormation, do the following:1. Make a Create* or Update* API call to the resource type with the failed attribute (to create or modify).2. Make a Describe* API call to retrieve current attributes of the resource during the stack creation or update process.The following example scenario demonstrates an internal error returned by a stack when the ReplicationInstancePrivateIpAddresses attribute of the AWS::DMS::ReplicationInstance resource is passed to Outputs.In the following example, the instance's private IP attribute is available only after the ReplicationInstance resource has switched its status to available. If the ReplicationInstance resource isn't in the available status by the time the stack processes Outputs, CloudFormation can't retrieve the private IP attribute. Then, the deployment fails.AWSTemplateFormatVersion: 2010-09-09Resources: BasicReplicationInstance: Type: AWS::DMS::ReplicationInstance Properties: ReplicationInstanceClass: dms.t2.smallOutputs: DmsInstanceIP: Value: !GetAtt BasicReplicationInstance.ReplicationInstancePrivateIpAddressesRelated informationTroubleshooting AWS CloudFormationViewing events with CloudTrail Event historyFollow" | https://repost.aws/knowledge-center/cloudformation-internal-failure-error |
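You can also query Event history programmatically instead of using the console filters described above. The following boto3 sketch is only an illustration that reuses the PutConformancePack example; adjust the event name and time window to your own failed resource before using it.

import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)  # window covering CREATE_IN_PROGRESS through the failure

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "PutConformancePack"}],
    StartTime=start,
    EndTime=end,
)["Events"]

for event in events:
    detail = json.loads(event["CloudTrailEvent"])  # full CloudTrail record as a JSON string
    print(event["EventName"], detail.get("errorCode"), detail.get("errorMessage"))

Reviewing errorCode and errorMessage for the returned events points to the root cause in the same way as step 5 above.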
How do I set special parameters in an AWS Glue job using AWS CloudFormation? | "I want to enable special parameters, such as --enable-metrics, for my job in AWS Glue. However, I get a template validation or "null values" error from AWS CloudFormation when I try to run my job. How do I resolve these errors?" | "I want to enable special parameters, such as --enable-metrics, for my job in AWS Glue. However, I get a template validation or "null values" error from AWS CloudFormation when I try to run my job. How do I resolve these errors?Short descriptionTo set special parameters for your job in AWS Glue, you must supply a key-value pair for the DefaultArguments property of the AWS::Glue::Job resource in CloudFormation. If you supply a key only in your job definition, then CloudFormation returns a validation error.Resolution1. In your CloudFormation template, set the value of your special parameter to an empty string for the DefaultArguments property of your job definition.JSON:"MyJob": { "Type": "AWS::Glue::Job", "Properties": { "Command": { "Name": "glueetl", "ScriptLocation": "s3://my-test//test-job1" }, "DefaultArguments": { "--job-bookmark-option": "job-bookmark-enable", "--enable-metrics": "" }, "ExecutionProperty": { "MaxConcurrentRuns": 2 }, "MaxRetries": 0, "Name": "cf-job3", "Role": { "Ref": "MyJobRole" } }}YAML:MyJob: Type: 'AWS::Glue::Job' Properties: Command: Name: glueetl ScriptLocation: 's3://my-test//test-job1' DefaultArguments: '--job-bookmark-option': job-bookmark-enable '--enable-metrics': '' ExecutionProperty: MaxConcurrentRuns: 2 MaxRetries: 0 Name: cf-job3 Role: !Ref MyJobRoleNote: In the preceding example JSON and YAML templates, the value of --enable-metrics is set to an empty string. The empty string validates the template and launches the resource that's configured with the special parameter.2. To activate your special parameter, run your job.Follow" | https://repost.aws/knowledge-center/cloudformation-glue-special-parameters |
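The same key-value requirement applies outside of CloudFormation. As a hedged illustration (the job name, role, and script location below are placeholders), this boto3 sketch creates a Glue job with --enable-metrics set to an empty string.

import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="cf-job3",
    Role="MyGlueJobRole",  # placeholder IAM role name
    Command={"Name": "glueetl", "ScriptLocation": "s3://my-test/test-job1"},
    DefaultArguments={
        "--job-bookmark-option": "job-bookmark-enable",
        "--enable-metrics": "",  # special parameters are key-value pairs; pass an empty string as the value
    },
    ExecutionProperty={"MaxConcurrentRuns": 2},
    MaxRetries=0,
)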
How do I turn on Container Insights metrics on an EKS cluster? | I want to configure Amazon CloudWatch Container Insights to see my Amazon Elastic Kubernetes Service (Amazon EKS) cluster metrics. | "I want to configure Amazon CloudWatch Container Insights to see my Amazon Elastic Kubernetes Service (Amazon EKS) cluster metrics.Short descriptionWhen used with Amazon EKS, Container Insights uses a containerized version of the CloudWatch agent to find all the containers running in a cluster. Container Insights also uses AWS Distro for OpenTelemetry (ADOT) Collector to find containers in a cluster. Then, it collects performance data at every layer of the performance stack, such as performance log events that use an embedded metric format. Afterward, it sends this data to CloudWatch Logs under the /aws/containerinsights/cluster-name/performance log group where CloudWatch creates aggregated metrics at the cluster, node, and pod levels. Container Insights also supports collecting metrics from clusters that are deployed on AWS Fargate for Amazon EKS. For more information, see Using Container Insights.Note: Container Insights is supported only on Linux instances. Amazon provides a CloudWatch agent container image on Amazon Elastic Container Registry (Amazon ECR). For more information, see cloudwatch-agent on Amazon ECR.ResolutionPrerequisitesBefore starting, review the following prerequisites:Make sure that your Amazon EKS cluster is running with nodes in the Ready state and the kubectl command is installed and running.Make sure that the AWS Identity and Access Management (IAM) managed CloudWatchAgentServerPolicy activates your Amazon EKS worker nodes to send metrics and logs to CloudWatch. To activate your worker nodes, attach a policy to the worker nodes' IAM role. Or, use an IAM role for service accounts for the cluster, and attach the policy to this role. For more information, see IAM roles for service accounts.Make sure that you're running a cluster that supports Kubernetes version 1.18 or higher. This is a requirement of Container Insights for EKS Fargate. Also, make sure that you define a Fargate profile to schedule pods on Fargate.Make sure that the Amazon EKS pod execution IAM role allows components that run on the Fargate infrastructure to make calls to AWS APIs on your behalf. For example, pulling container images from Amazon ECR.Set up Container Insights metrics on your EKS EC2 cluster using the CloudWatch agentThe CloudWatch agent or ADOT creates a log group that's named aws/containerinsights/Cluster_Name/performance and sends the performance log events to this log group.When setting up Container Insights to collect metrics, you must deploy the CloudWatch agent container image as a DaemonSet from Docker Hub. By default, this is done as an anonymous user. This pull might be subject to a rate limit.1. If you don't have a namespace called amazon-cloudwatch, then create one:kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cloudwatch-namespace.yaml2. Create a service account for the CloudWatch agent that's named cloudwatch-agent:kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cwagent/cwagent-serviceaccount.yaml3. 
Create a configmap as a configuration file for the CloudWatch agent:ClusterName=<my-cluster-name>curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cwagent/cwagent-configmap.yaml | sed 's/cluster_name/'${ClusterName}'/' | kubectl apply -f -Note: Replace my-cluster-name with the name of your EKS cluster. To further customize the CloudWatch agent configuration, see Create a ConfigMap for the CloudWatch agent.4. Deploy the cloudwatch-agent DaemonSet:kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cwagent/cwagent-daemonset.yamlOptional: To pull the CloudWatch agent from the Amazon Elastic Container Registry, patch the cloudwatch-agent DaemonSet:kubectl patch ds cloudwatch-agent -n amazon-cloudwatch -p \ '{"spec":{"template":{"spec":{"containers":[{"name":"cloudwatch-agent","image":"public.ecr.aws/cloudwatch-agent/cloudwatch-agent:latest"}]}}}}'Note: The CloudWatch agent Docker image on Amazon ECR supports the ARM and AMD64 architectures. Replace the latest image tag based on the image version and architecture. For more information, see image tags for cloudwatch-agent on Amazon ECR.5. For IAM roles for service accounts, create an OIDC provider and an IAM role and policy. Then, associate the IAM role to the cloudwatch-agent service account:kubectl annotate serviceaccounts cloudwatch-agent -n amazon-cloudwatch "eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/IAM_ROLE_NAME"Note: Replace ACCOUNT_ID with your account ID and IAM_ROLE_NAME with the IAM role that you use for the service accounts.Troubleshoot the CloudWatch agent1. Run the following command to retrieve the list of pods:kubectl get pods -n amazon-cloudwatch2. Run the following command to check the events at the bottom of the output:kubectl describe pod pod-name -n amazon-cloudwatch3. Run the following command to check the logs:kubectl logs pod-name -n amazon-cloudwatch4. If you see a CrashLoopBackOff error for the CloudWatch agent, then make sure that your IAM permissions are set correctly.For more information, see Verify prerequisites.Delete the CloudWatch AgentTo delete the CloudWatch agent, run the following command:kubectl delete -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cloudwatch-namespace.yamlNote: Deleting the namespace also deletes the associated resources.Set up Container Insights metrics on your EKS EC2 cluster using ADOT1. Run the following command to deploy the ADOT Collector as a DaemonSet:curl https://raw.githubusercontent.com/aws-observability/aws-otel-collector/main/deployment-template/eks/otel-container-insights-infra.yaml | kubectl apply -f -For more customizations, see Container Insights EKS infrastructure metrics.2. Run the following command to confirm that the collector is running:kubectl get pods -l name=aws-otel-eks-ci -n aws-otel-eks3. Optional: By default, the aws-otel-collector image is pulled from Docker Hub as an anonymous user. This pull might be subject to a rate limit. 
To pull the aws-otel-collector Docker image on Amazon ECR, patch the aws-otel-eks-ci DaemonSet:kubectl patch ds aws-otel-eks-ci -n aws-otel-eks -p \'{"spec":{"template":{"spec":{"containers":[{"name":"aws-otel-collector","image":"public.ecr.aws/aws-observability/aws-otel-collector:latest"}]}}}}'Note: The aws-otel-collector Docker image on Amazon ECR supports the ARM and AMD64 architectures. Replace the latest image tag based on the image version and architecture. For more information, see image tags for aws-otel-collector on Amazon ECR.4. Optional: For IAM roles for service accounts, create an OIDC provider and an IAM role and policy. Then, associate the IAM role to the aws-otel-sa service account.kubectl annotate serviceaccounts aws-otel-sa -n aws-otel-eks "eks.amazonaws.com/role-arn=arn:aws:iam::ACCOUNT_ID:role/IAM_ROLE_NAME"Note: Replace ACCOUNT_ID with your account ID and IAM_ROLE_NAME with the IAM role that you use for the service accounts.Delete ADOTTo delete ADOT, run the following command:curl https://raw.githubusercontent.com/aws-observability/aws-otel-collector/main/deployment-template/eks/otel-container-insights-infra.yaml |kubectl delete -f -Set up Container Insights metrics on an EKS Fargate cluster using ADOTFor applications that run on Amazon EKS and AWS Fargate, you can use ADOT to set up Container Insights. EKS Fargate networking architecture doesn’t allow pods to directly reach the kubelet on the worker to retrieve resource metrics. The ADOT Collector calls the Kubernetes API server to proxy the connection to the kubelet on a worker node. It then collects kubelet’s advisor metrics for workloads on that node.Note: A single instance of ADOT Collector isn't sufficient to collect resource metrics from all the nodes in a cluster.The ADOT Collector sends the following metrics to CloudWatch for every workload that runs on EKS Fargate:pod_cpu_utilization_over_pod_limitpod_cpu_usage_totalpod_cpu_limitpod_memory_utilization_over_pod_limitpod_memory_working_setpod_memory_limitpod_network_rx_bytespod_network_tx_bytesEach metric is associated with the following dimension sets and collected under the CloudWatch namespace that's named ContainerInsights:ClusterName, LaunchTypeClusterName, Namespace, LaunchTypeClusterName, Namespace, PodName, LaunchTypeFor more details, see Container Insights EKS Fargate.To deploy ADOT in your EKS Fargate, complete the following steps:1. Associate a Kubernetes service account with an IAM role. Create an IAM role that's named EKS-Fargate-ADOT-ServiceAccount-Role that's associated with a Kubernetes service account that's named adot-collector. The following helper script requires eksctl:#!/bin/bashCLUSTER_NAME=YOUR-EKS-CLUSTER-NAMEREGION=YOUR-EKS-CLUSTER-REGIONSERVICE_ACCOUNT_NAMESPACE=fargate-container-insightsSERVICE_ACCOUNT_NAME=adot-collectorSERVICE_ACCOUNT_IAM_ROLE=EKS-Fargate-ADOT-ServiceAccount-RoleSERVICE_ACCOUNT_IAM_POLICY=arn:aws:iam::aws:policy/CloudWatchAgentServerPolicyeksctl utils associate-iam-oidc-provider \--cluster=$CLUSTER_NAME \--approveeksctl create iamserviceaccount \--cluster=$CLUSTER_NAME \--region=$REGION \--name=$SERVICE_ACCOUNT_NAME \--namespace=$SERVICE_ACCOUNT_NAMESPACE \--role-name=$SERVICE_ACCOUNT_IAM_ROLE \--attach-policy-arn=$SERVICE_ACCOUNT_IAM_POLICY \--approveNote: Replace CLUSTER_NAME with your cluster name and REGION with your AWS Region.2. 
Run the following command to deploy the ADOT Collector as a Kubernetes StatefulSet:ClusterName=<my-cluster-name>Region=<my-cluster-region>curl https://raw.githubusercontent.com/aws-observability/aws-otel-collector/main/deployment-template/eks/otel-fargate-container-insights.yaml | sed 's/YOUR-EKS-CLUSTER-NAME/'${ClusterName}'/;s/us-east-1/'${Region}'/' | kubectl apply -f -Note: Make sure that you have a matching Fargate profile to provision the StatefulSet pods on AWS Fargate. Replace my-cluster-name with your cluster's name and my-cluster-region with the Region that your cluster is located in.3. Run the following command to verify that the ADOT Collector pod is running:kubectl get pods -n fargate-container-insights4. Optional: By default, the aws-otel-collector image is pulled from Docker Hub as an anonymous user. This pull might be subject to a rate limit. To pull the aws-otel-collector Docker image on Amazon ECR, patch the adot-collector StatefulSet:kubectl patch sts adot-collector -n fargate-container-insights -p \'{"spec":{"template":{"spec":{"containers":[{"name":"adot-collector","image":"public.ecr.aws/aws-observability/aws-otel-collector:latest"}]}}}}'Delete ADOTTo delete ADOT, run the following command:eksctl delete iamserviceaccount --cluster CLUSTER_NAME --name adot-collectorClusterName=<my-cluster-name>Region=<my-cluster-region>curl https://raw.githubusercontent.com/aws-observability/aws-otel-collector/main/deployment-template/eks/otel-fargate-container-insights.yaml | sed 's/YOUR-EKS-CLUSTER-NAME/'${ClusterName}'/;s/us-east-1/'${Region}'/' | kubectl delete -f -Related informationUsing Container InsightsFollow" | https://repost.aws/knowledge-center/cloudwatch-container-insights-eks-fargate |
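After the collector is running, one way to confirm that data is arriving is to look for the metrics listed above in the ContainerInsights namespace. The following boto3 sketch is an assumption about how you might verify, not part of the setup steps; the cluster name is a placeholder.

import boto3

cloudwatch = boto3.client("cloudwatch")

# List the Fargate pod CPU metrics reported for a given cluster.
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(
    Namespace="ContainerInsights",
    MetricName="pod_cpu_utilization_over_pod_limit",
    Dimensions=[{"Name": "ClusterName", "Value": "my-cluster-name"}],  # placeholder cluster name
):
    for metric in page["Metrics"]:
        print(metric["MetricName"], metric["Dimensions"])

If nothing is returned after a few minutes, check the collector pod logs and the IAM role associated with the adot-collector service account.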
How do I install .NET Framework 3.5 on an EC2 Windows instance that doesn't have internet access? | "I want to use .NET Framework 3.5 on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance, but my instance doesn't have internet access.When I try to install .NET Framework 3.5 using the Add Roles and Features wizard, I get an error similar to the following:"Do you need to specify an alternate source path? One or more installation selections are missing source files on the destination."How do I install .NET Framework 3.5 on my Amazon EC2 Windows instance when my instance doesn't have internet access?" | "I want to use .NET Framework 3.5 on my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance, but my instance doesn't have internet access.When I try to install .NET Framework 3.5 using the Add Roles and Features wizard, I get an error similar to the following:"Do you need to specify an alternate source path? One or more installation selections are missing source files on the destination."How do I install .NET Framework 3.5 on my Amazon EC2 Windows instance when my instance doesn't have internet access?ResolutionIf your instance doesn't have internet access, AWS provides public Amazon Elastic Block Store (Amazon EBS) snapshots that include these extra files.Follow these steps:Find and attach the correct Amazon EBS volume for your instance using the Amazon EC2 console, Windows PowerShell, or the AWS Command Line Interface (AWS CLI). For instructions, see Adding Windows components using installation media.Bring the new volume online by making an Amazon EBS volume available for use on Windows.Install .NET Framework 3.5 using the Add Roles and Features wizard, or using Windows PowerShell.To confirm if you have successfully installed .NET Framework 3.5, see How to: Determine which .NET Framework versions are installed.Related informationHow do I troubleshoot Remote Desktop connection issues to my Amazon EC2 Windows instance?.NET Framework 3.5 installation error: 0x800F0906, 0x800F081F, 0x800F0907Follow" | https://repost.aws/knowledge-center/net-framework-windows |
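If you prefer to script the volume steps, the following boto3 sketch is a hedged example of the "create a volume from the snapshot and attach it" flow. The snapshot ID, instance ID, and Availability Zone are placeholders that you determine by following Adding Windows components using installation media.

import boto3

ec2 = boto3.client("ec2")

# Placeholders: identify the correct installation-media snapshot and your instance
# by following the linked documentation before running this.
snapshot_id = "snap-0123456789abcdef0"
instance_id = "i-0123456789abcdef0"
availability_zone = "us-east-1a"  # must match the instance's Availability Zone

volume = ec2.create_volume(SnapshotId=snapshot_id, AvailabilityZone=availability_zone)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the volume; choose a device name that isn't already in use on the instance.
ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId=instance_id, Device="xvdf")

After the volume is attached, bring it online in Windows and run the Add Roles and Features wizard as described above.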
How can I send memory and disk metrics from my EC2 instances to CloudWatch? | I want to send memory and disk metrics from my Amazon Elastic Compute Cloud (Amazon EC2) instances to Amazon CloudWatch Metrics. How can I do this? | "I want to send memory and disk metrics from my Amazon Elastic Compute Cloud (Amazon EC2) instances to Amazon CloudWatch Metrics. How can I do this?Short descriptionBy default, Amazon EC2 delivers a set of metrics related to your instance to CloudWatch in the AWS/EC2 namespace. This includes CPUUtilization, a set of disk Read and Write metrics, and a set of NetworkIn and NetworkOut metrics. But, EC2 doesn't provide metrics related to OS-level memory usage or disk usage metrics.To find these metrics and deliver them to CloudWatch as custom metrics, install the Unified CloudWatch Agent. Then, define these metrics in the Agent configuration file.Important: Custom metrics are charged according to their storage and API use.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.You can download and install the CloudWatch agent manually using the AWS CLI or you can integrate it with AWS Systems Manager Agent (SSM Agent). The CloudWatch agent is supported on both Linux and Windows systems. Use these steps to install the CloudWatch agent:1. Create IAM roles or users that enable the agent to collect metrics from the server and, optionally, integrate with AWS Systems Manager. Attach this IAM role to the EC2 instance that you want to install the agent on.2. Download the agent package and install the agent package.3. Create the CloudWatch agent configuration file and specify the metrics that you want to collect.This example shows a basic agent configuration file that reports memory usage and disk usage metrics on a Linux system:{ "metrics": { "metrics_collected": { "mem": { "measurement": [ "mem_used_percent" ] }, "disk": { "measurement": [ "used_percent" ], "resources": [ "*" ] } }, "append_dimensions": { "InstanceId": "${aws:InstanceId}" } }}This is an example of a basic agent configuration file for Windows systems:{ "metrics": { "metrics_collected": { "LogicalDisk": { "measurement": [ "% Free Space" ], "resources": [ "*" ] }, "Memory": { "measurement": [ "% Committed Bytes In Use" ] } }, "append_dimensions": { "InstanceId": "${aws:InstanceId}" } }}4. Start the agent on your EC2 instance.When the agent is running, it reports metrics from your instance to the CWAgent namespace within CloudWatch, by default. If you experience issues, see Troubleshooting the CloudWatch agent.Related informationMonitor your instances using CloudWatchFollow" | https://repost.aws/knowledge-center/cloudwatch-memory-metrics-ec2 |
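Once the agent is running, you can confirm that the custom metrics are arriving under the CWAgent namespace. The following boto3 sketch is illustrative only; the instance ID is a placeholder, and the dimension set must match exactly what your agent configuration emits (with the example configuration above, mem_used_percent carries only InstanceId).

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="CWAgent",
    MetricName="mem_used_percent",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance ID
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)

# Print the memory utilization data points in time order.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))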
What's the difference between the logout endpoint and the GlobalSignOut API in Amazon Cognito? | I want to understand how to use the logout endpoint and the GlobalSignOut API in Amazon Cognito. | "I want to understand how to use the logout endpoint and the GlobalSignOut API in Amazon Cognito.Short descriptionThe Amazon Cognito logout endpoint clears a user session from a browser. The GlobalSignOut API invalidates all the access and refresh tokens that are issued to a specific user.ResolutionSign out users with the logout endpointWhen you use a hosted endpoint for user authentication, Amazon Cognito stores a cookie named "cognito" in your browser. The cookie is associated with the Amazon Cognito domain that's configured with your user pool. The cookie is valid for 1 hour. When a user tries to sign in again during an active session, Amazon Cognito asks the user if they want to continue their existing session. This allows the user to sign in without providing credentials. If a user chooses the Sign in as example_username button to use an existing session, then the cookie's validity resets to 1 hour.When a user visits the logout endpoint in their browser, Amazon Cognito clears the session cookie. The user must provide their credentials to sign in again.When a user signs in with third-party identity providers (IdPs), there's an extra step to perform. If a user signs in using one of the third-party IdPs, then visiting the logout endpoint clears the "cognito" cookie from the browser. However, the IdP can still have an active session. Consider the following information when you're clearing out the user's IdP session:Amazon Cognito supports the single logout (SLO) feature for Security Assertion Markup Language version 2.0 (SAML 2.0) IdPs with HTTP POST Binding. If your provider accepts HTTP POST Binding on its SLO endpoint, then consider implementing SLO for SAML IdPs. If a user visits the logout endpoint with SLO turned on, then Amazon Cognito sends a signed logout request to the SAML IdP. Then, the SAML IdP clears the IdP session.For social and OpenID Connect (OIDC) IdPs, you must create a custom workflow to clear the IdP session from the browser.Sign out users with the GlobalSignOut APIWhen you use the GlobalSignOut API, Amazon Cognito revokes all the access and refresh tokens that are issued to a user. Note that only Amazon Cognito is informed of the token revocation. Your application might continue to accept the tokens until they expire.Your application can use both the GlobalSignOut and AdminUserGlobalSignOut APIs to globally sign out users. When your application uses REST APIs for Amazon Cognito user authentication, you must use these APIs to sign out users.When the application tries to use a revoked token, Amazon Cognito raises an error indicating that you revoked the refresh token. The user must sign in again to get a new set of JSON Web Tokens (JWTs).You can configure the expiration time for your access and ID tokens in your user pool app client. You can change the expiration time to a value between 5 minutes and 24 hours.Follow" | https://repost.aws/knowledge-center/cognito-logout-endpoint-globalsignoutapi |
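For reference, here is a minimal boto3 sketch of both sign-out calls; the access token, user pool ID, and username are placeholders. GlobalSignOut is called with the user's own access token, while AdminUserGlobalSignOut is called with developer credentials against the user pool.

import boto3

cognito_idp = boto3.client("cognito-idp")

# Sign out the current user with their access token (the token shown is a placeholder).
cognito_idp.global_sign_out(AccessToken="eyJraWQiOi...example-access-token")

# Sign out a user by name using developer credentials (placeholders shown).
cognito_idp.admin_user_global_sign_out(
    UserPoolId="us-east-1_examplepool",
    Username="example_username",
)

After either call, Amazon Cognito revokes the user's access and refresh tokens, but your application might still accept previously issued tokens until they expire.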
How do I encrypt an HBase table in Amazon EMR using AES? | I want to use the Advanced Encryption Standard (AES) to encrypt an Apache HBase table on an Amazon EMR cluster. | "I want to use the Advanced Encryption Standard (AES) to encrypt an Apache HBase table on an Amazon EMR cluster.ResolutionYou can encrypt a new or existing HBase table using the transparent encryption feature. This feature encrypts HFile data and write-ahead logs (WAL) at rest.Note: When you use Amazon Simple Storage Service (Amazon S3) as the data source rather than HDFS, you can protect data at rest and in transit using server-side and client-side encryption. For more information, see Protecting data using encryption.Encrypt a new HBase table1. Open the Amazon EMR console.2. Choose a cluster that already has HBase, or create a new cluster with HBase.3. Connect to the master node using SSH.4. Use the keytool command to create a secret key of appropriate length for AES encryption. Provide a password and alias.Example command:sudo keytool -keystore /etc/hbase/conf/hbase.jks -storetype jceks -storepass:file mysecurefile -genseckey -keyalg AES -keysize 128 -alias your-aliasNote: The file mysecurefile contains the storepass password. Ensure that the file is readable only by the file owner and is deleted after use.Example output:Enter key password for <your_key_store> (RETURN if same as keystore password):Warning:The JCEKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/hbase/conf/hbase.jks -destkeystore /etc/hbase/conf/hbase.jks -deststoretype pkcs12".5. Add the following properties to the hbase-site.xml file on each node in the EMR cluster. In the hbase.crypto.keyprovider.parameters property, provide the path to hbase.jks and the password. This is the same password that you specified in the keytool command in step 4. In the hbase.crypto.master.key.name property, specify your alias.<property> <name>hbase.crypto.keyprovider.parameters</name> <value>jceks:///etc/hbase/conf/hbase.jks?password=your_password</value> </property> <property> <name>hbase.crypto.master.key.name</name> <value><your-alias></value> </property> <property> <name>hbase.regionserver.hlog.reader.impl</name> <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value> </property> <property> <name>hbase.regionserver.hlog.writer.impl</name> <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value> </property> <property> <name>hfile.format.version</name> <value>3</value> </property> <property> <name>hbase.regionserver.wal.encryption</name> <value>true</value> </property> <property> <name>hbase.crypto.keyprovider</name> <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value> </property>6. Copy the hbase.jks file to all cluster nodes. Be sure to copy the file to the location that's specified in the hbase.crypto.keyprovider.parameters property. In the following example, replace HostToCopy and ToHost with the corresponding public DNS names for the nodes.cd /etc/hbase/confscp hbase.jks HostToCopy:/tmpssh ToHostsudo cp /tmp/hbase.jks /etc/hbase/conf/7. Restart all HBase services on the master and core nodes, as shown in the following example. Repeat the hbase-regionserver stop and start commands on each core node.Note: Stopping and starting Region servers might impact ongoing reads/writes to HBase tables on your cluster. Therefore, stop and start the HBase daemons only during downtime. 
Verify possible impacts on a test cluster before starting and stopping a production cluster.Amazon EMR 5.30.0 and later release versions:sudo systemctl stop hbase-mastersudo systemctl stop hbase-regionserversudo systemctl start hbase-mastersudo systemctl start hbase-regionserverAmazon EMR 4x to Amazon EMR 5.29.0 release versions:sudo initctl stop hbase-mastersudo initctl stop hbase-regionserversudo initctl start hbase-mastersudo initctl start hbase-regionserver8. Log in to the HBase shell:# hbase shell9. Create a table with AES encryption:create 'table1',{NAME=>'columnfamily',ENCRYPTION=>'AES'}Example output:0 row(s) in 1.6760 seconds=> Hbase::Table - table110. Describe the table to confirm that AES encryption is enabled:describe 'table1'Example output:Table table1 is ENABLEDtable1COLUMN FAMILIES DESCRIPTION{NAME => 'columnfamily', BLOOMFILTER => 'ROW', ENCRYPTION => 'AES', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE',DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}1 row(s) in 0.0320 secondsEncrypt an existing table1. Describe the unencrypted table:describe 'table2'Example output:Table table2 is ENABLEDtable2COLUMN FAMILIES DESCRIPTION{NAME => 'cf2', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}1 row(s) in 0.0140 seconds2. Use the alter command to enable AES encryption:alter 'table2',{NAME=>'cf2',ENCRYPTION=>'AES'}Example output:Updating all regions with the new schema...1/1 regions updated.Done.0 row(s) in 1.9000 seconds3. Confirm that the table is encrypted:describe 'table2'Example output:Table table2 is ENABLEDtable2COLUMN FAMILIES DESCRIPTION{NAME => 'cf2', BLOOMFILTER => 'ROW', ENCRYPTION => 'AES', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE',DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}1 row(s) in 0.0120 secondsNote: If you create a secondary index on the table (for example, with Apache Phoenix), then WAL encryption might not work. When this happens, you get a "java.lang.NullPointerException" response. To resolve this issue, set hbase.regionserver.wal.encryption to false in the hbase-site.xml file. Example:<property> <name>hbase.regionserver.wal.encryption</name> <value>false</value> </property>Related informationUsing the HBase shellTransparent encryption in HDFS on Amazon EMRFollow" | https://repost.aws/knowledge-center/emr-encrypt-hbase-table-aes |
Why wasn't my Lambda function triggered by my EventBridge rule? | "I created an Amazon EventBridge rule using the AWS Command Line Interface (AWS CLI), API, or AWS CloudFormation. However, the target AWS Lambda function is not getting invoked. When I create or update the same EventBridge rule through the AWS Management Console, the rule works correctly. How can I troubleshoot this?" | "I created an Amazon EventBridge rule using the AWS Command Line Interface (AWS CLI), API, or AWS CloudFormation. However, the target AWS Lambda function is not getting invoked. When I create or update the same EventBridge rule through the AWS Management Console, the rule works correctly. How can I troubleshoot this?Short descriptionWhen creating an EventBridge rule with a Lambda function as the target, keep the following in mind:When using the EventBridge console to create the rule, the appropriate permissions are added to the function's resource policy automatically.When using the AWS CLI, SDK, or AWS CloudFormation to create the same rule, you must manually apply the permissions in the resource policy.The permissions grant the Amazon EventBridge service access to invoke the Lambda function.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Review CloudWatch metrics for the EventBridge ruleOpen the CloudWatch console.From the navigation pane on the left, under Metrics, select All Metrics.Select the AWS/Events namespace.Select Invocations and FailedInvocations metrics for the rule that you're reviewing.Invocation datapoints indicate that the target was invoked by the rule. If FailedInvocations data points are present, then there is an issue invoking the target. FailedInvocations represent a permanent failure and might be the result of incorrect permissions or a misconfiguration of the target.Confirm appropriate permissions in the Lambda function resource-policyOpen the AWS Lambda console.Select the target function.Select the Configuration tab, and then choose Permissions.Under the Resource-based policy section, review the policy document.The following is a sample resource policy that allows EventBridge to invoke the Lambda function:{ "Effect": "Allow", "Action": "lambda:InvokeFunction", "Resource": "arn:aws:lambda:region:account-id:function:function-name", "Principal": { "Service": "events.amazonaws.com" }, "Condition": { "ArnLike": { "AWS:SourceArn": "arn:aws:events:region:account-id:rule/rule-name" } }, "Sid": "InvokeLambdaFunction"}Note: Replace the ARN with the appropriate Region, account ID, and resource name before deploying.Or, you can use the GetPolicy API to retrieve the Lambda function's resource policy.If the existing resource policy doesn't contain the necessary permissions, then update the policy using the preceding steps as reference. You can also update the policy using the AddPermission command in the AWS CLI. The following is an example of the AddPermission command:aws lambda add-permission \--function-name MyFunction \--statement-id MyId \--action 'lambda:InvokeFunction' \--principal events.amazonaws.com \--source-arn arn:aws:events:us-east-1:123456789012:rule/MyRuleAdd an Amazon SQS Dead Letter Queue to the targetEventBridge uses Amazon Simple Queue Service (Amazon SQS) DLQs to store events that couldn't be delivered to a target. An SQS DLQ can be attached to the target reporting FailedInvocations. Events can be retrieved from the DLQ and analyzed to obtain more context on the issue. 
After remediation is completed, previously failed events can be resent to the target for processing.Open the relevant rule in the EventBridge console.Under Targets, select Edit, and then expand the Additional settings section.Under Dead-letter queue, choose Select an Amazon SQS queue in the current AWS account to use as the dead-letter queue.Select an SQS queue to use as the DLQ.After the DLQ is assigned, complete the remaining steps in the Edit Rule section to save the changes.Related informationMy rule ran but my Lambda function wasn't invokedUsing resource-based policies for Amazon EventBridge: AWS Lambda permissionsEvent retry policy and using dead-letter queuesImproved failure recovery for Amazon EventBridgeFollow" | https://repost.aws/knowledge-center/eventbridge-lambda-not-triggered |
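To check or add the permission programmatically, the following boto3 sketch is a hedged equivalent of the console and CLI steps above, with a placeholder function name and rule ARN. It prints the function's resource-based policy if one exists, and then grants events.amazonaws.com permission to invoke the function for the specific rule.

import boto3
from botocore.exceptions import ClientError

lambda_client = boto3.client("lambda")

function_name = "MyFunction"  # placeholder
rule_arn = "arn:aws:events:us-east-1:123456789012:rule/MyRule"  # placeholder

# Show the current resource-based policy, if any.
try:
    print(lambda_client.get_policy(FunctionName=function_name)["Policy"])
except ClientError as error:
    if error.response["Error"]["Code"] != "ResourceNotFoundException":
        raise
    print("No resource-based policy is attached to the function.")

# Grant EventBridge permission to invoke the function for this specific rule.
# This call fails if a statement with the same StatementId already exists.
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId="MyId",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)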
How do I create and activate a new AWS account? | I'm getting started with AWS and I want to create and activate a new AWS account. | "I'm getting started with AWS and I want to create and activate a new AWS account.ResolutionSign up using your email addressOpen the Amazon Web Services (AWS) home page.Choose Create an AWS Account.Note: If you signed in to AWS recently, choose Sign in to the Console. If Create a new AWS account isn't visible, first choose Sign in to a different account, and then choose Create a new AWS account.In Root user email address, enter your email address, edit the AWS account name, and then choose Verify email address. An AWS verification email is sent to this address with a verification code.Tip: For Root user email address, use a corporate email distribution list (for example, it.admins@example.com) or email box if your account is a professional AWS account. Avoid using an individual's corporate email address (for example, paulo.santos@example.com). With this practice, your company can retain access to the AWS account even when an employee changes positions or leaves the company. The email address can be used to reset account credentials. Be sure that you protect access to these distribution lists. Don't use the AWS account root user login for your everyday tasks. It's a best practice to turn on multi-factor authentication (MFA) on the root account to secure your AWS resources.Tip: For AWS Account name, use an account naming standard so that the account name can be recognized in your invoice or Billing and Cost Management console. If it's a company account, then consider using the naming standard of organization-purpose-environment (for example, ExampleCompany-audit-prod). If it's a personal account, consider using the naming standard of first name-last name-purpose (for example, paulo-santos-testaccount). You can change the account name in your account settings after you sign up. For more information, see How do I change the name on my AWS account?Verify your email addressEnter the code that you receive, and then choose Verify. The code might take a few minutes to arrive. Check your email and spam folder for the verification code email.Create your passwordEnter your Root user password and Confirm root user password, and then choose Continue.Add your contact informationSelect Personal or Business.Note: Personal accounts and business accounts have the same features and functions.Enter your personal or business information.Important: For business AWS accounts, it's a best practice to enter the company phone number rather than a personal cell phone number. Configuring a root account with an individual email address or a personal phone number can make your account insecure.Read and accept the AWS Customer Agreement.Choose Continue.You receive an email to confirm that your account is created. You can sign in to your new account using the email address and password that you registered with. However, you can't use AWS services until you finish activating your account.Add a payment methodOn the Billing information page, enter the information about your payment method, and then choose Verify and Add.If you're signing up for an Amazon Web Services India Private Limited (AWS India) account, then you must provide your CVV for the verification process. You might also have to enter a one-time password, depending on your bank. AWS India charges your payment method two Indian Rupees (INR), as part of the verification process. 
AWS India refunds the two INR after the verification is complete.If you want to use a different billing address for your AWS billing information, choose Use a new address. Then, choose Verify and Continue.Important: You can't proceed with the sign-up process until you add a valid payment method.Verify your phone numberOn the Confirm your identity page, select a contact method to receive a verification code.Select your phone number country or region code from the list.Enter a mobile phone number where you can be reached in the next few minutes.If presented with a CAPTCHA, enter the displayed code, and then submit.Note: To troubleshoot CAPTCHA errors, see What do I do if I receive an error when entering the CAPTCHA to sign in to my AWS account?In a few moments, an automated system contacts you.Enter the PIN that you receive, and then choose Continue.Customer verificationIf you're signing up with a billing or contact address located in India, then you must complete the following steps:Choose the Primary purpose of account registration for your account creation. If your account is tied to a business, then select the option that best applies to your business.Choose the Ownership type that best represents the owner of the account. If you select a company, organization, or partnership as the ownership type, then enter the name of a key managerial person. The key managerial person can be a director, head of operations, or a person in charge of operations in your business.Choose Continue.Choose an AWS Support planOn the Select a support plan page, choose one of the available Support plans. For a description of the available Support plans and their benefits, see Compare AWS Support plans.Choose Complete sign up.Wait for account activationAfter you choose a Support plan, a confirmation page indicates that your account is being activated. Accounts are usually activated within a few minutes, but the process might take up to 24 hours.You can sign in to your AWS account during this time. The AWS home page might display a Complete Sign Up button during this time, even if you've completed all the steps in the sign-up process.When your account is fully activated, you receive a confirmation email. Check your email and spam folder for the confirmation email. After you receive this email, you have full access to all AWS services.Troubleshooting delays in account activationAccount activation can sometimes be delayed. If the process takes more than 24 hours, check the following:Finish the account activation process. You might have accidentally closed the window for the sign-up process before you added all the necessary information. To finish the sign-up process, open the registration page. Choose Sign in to an existing AWS account, and then sign in using the email address and password you chose for the account.Check the information associated with your payment method. Check Payment Methods in the AWS Billing and Cost Management console. Fix any errors in the information.Contact your financial institution. Financial institutions occasionally reject authorization requests from AWS for various reasons. Contact your payment method's issuing institution and ask that they approve authorization requests from AWS.Note: AWS cancels the authorization request as soon as it's approved by your financial institution. You aren't charged for authorization requests from AWS. 
Authorization requests might still appear as a small charge (usually one USD) on statements from your financial institution.Check your email for requests for additional information. If additional information is needed for the account activation process, then an email is sent to the AWS account root user email address. Check your email and spam folder to see if AWS needs any information from you to complete the activation process. Then, respond with the requested information and your account will be reviewed for activation.Try a different browser.Contact AWS Support. If you have additional questions, or can't provide the requested information, then contact AWS Support. Be sure to mention any troubleshooting steps that you already tried. If you can't sign in to your account, then contact the AWS Account Verification team using the AWS Account Verification support form.Note: Don't provide sensitive information, such as credit card numbers, in any correspondence with AWS.Improving the security of your AWS accountTo help secure your AWS resources, see Security best practices in AWS Identity and Access Management (IAM).Related informationWhat is the AWS Free Tier, and how do I use it?Avoiding unexpected chargesAfter I use AWS Organizations to create a member account, how do I access that account?What should I do if I didn't receive a call from AWS to verify my new account or the PIN I entered doesn't work?How do I resolve the "maximum number of failed attempts" error when I try to verify my AWS account by phone?Follow" | https://repost.aws/knowledge-center/create-and-activate-aws-account |
How can I capture and receive notifications about error events in my RDS for SQL Server instance? | I want to raise and capture error events on my Amazon Relational Database Service (Amazon RDS) for SQL Server DB instance. I also want to be notified whenever an error event occurs. How can I do this? | "I want to raise and capture error events on my Amazon Relational Database Service (Amazon RDS) for SQL Server DB instance. I also want to be notified whenever an error event occurs. How can I do this?Short descriptionSQL Server uses error handling to resolve object existence errors and runtime errors in T-SQL code. To handle errors like these, use the TRY and CATCH method. Then, use the RAISERROR command to generate custom errors and throw exceptions.ResolutionUse the TRY and CATCH method1. Use a TRY and CATCH statement to define a code block for error testing. Any code that you include between BEGIN TRY and END TRY is monitored for errors at the time of run. Whenever an error occurs in the block, it's transferred to the CATCH session. Then, depending on the code in the CATCH block, the action is performed. Depending on the issue, you can fix the error, report the error, or log the error into the SQL Server error logs.BEGIN TRY --code to try END TRY BEGIN CATCH --code to run if an error occurs --is generated in try END CATCH2. Create a custom message that raises a SQL Server error when it occurs. To do this, add RAISERROR to your stored procedures or to a SQL Server that you want to monitor.RAISERROR ( { msg_id | msg_str | @local_variable } { , severity, state } [ , argument [ , ...n ] ] ) [ WITH option [ , ...n ] ]Examples of the TRY CATCH method and RAISERROR When you capture errors using the TRY CATCH method, create a custom message, and then raise the error into the SQL Server error logs. See this example:BEGIN TRYSELECT 1/0END TRYBEGIN CATCHDECLARE @Var VARCHAR(100)SELECT ERROR_MESSAGE()SELECT @Var = ERROR_MESSAGE()RAISERROR(@Var, 16,1) WITH LOGEND CATCHThis is an example of an error raised in the SQL Server logs:Error: 50000, Severity: 16, State: 1. Divide by zero error encountered.Monitor the SQL Server error logs and send notificationsTo monitor the SQL Server agent job, add a script to the step to monitor and raise the error in the SQL Server error logs. You can then use these logs to send notifications.1. Edit your SQL Server job, and add the step. For type, choose T-SQL. Enter a database name, and then add this T-SQL in the command section:DECLARE @name NVARCHAR(128)select @name = name from msdb.dbo.sysjobs where job_id = $(ESCAPE_SQUOTE(JOBID));-- Temporary table to store the step history of the current job runDECLARE @jb TABLE ([step_id] int, [step_name] NVARCHAR(128), [message] NVARCHAR(4000), [run_status] int);insert into @jbselect hist.step_id, hist.step_name, hist.message, hist.run_status from msdb.dbo.sysjobhistory hist inner join (select a.job_id , convert(varchar(50),max(a.run_requested_date),112) as run_date , replace(convert(varchar(50),max(a.run_requested_date),108), ':', '') as run_time from msdb.dbo.sysjobs j inner join msdb.dbo.sysjobactivity a on j.job_id = a.job_id where j.name = @name and a.run_requested_date is not null group by a.job_id) ja on hist.job_id = ja.job_id and hist.run_date = ja.run_date and hist.run_time >= ja.run_time order by hist.step_iddeclare @error intselect @error = count(run_status) from @jb where run_status != 0if @error > 0RAISERROR('Automatic message from RDS for SQL Server Agent. 
Job test2 successful', 18,1) WITH LOG --\will raise the error when job successful elseRAISERROR('Automatic message from RDS for SQL Server Agent. Job test2 failed', 16,1) WITH LOG --\will raise the error when job failed2. Configure the SQL Server job to go to the step that you created for the On failure action section.3. Run this procedure to confirm that the SQL Server job ran correctly and updated the job failed details in the SQL Server error logs. For more information, see Viewing error and agent logs.EXEC rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1;Example in the error logs:Msg 50000, Level 18, State 1, Line 33Automatic message from RDS for SQL Server Agent. Job test2 failedMsg 50000, Level 18, State 1, Line 29Automatic message from RDS for SQL Server Agent. Job test2 successful4. Configure notifications by publishing the SQL Server logs to Amazon CloudWatch. Modify the SQL Server DB instance using the Amazon RDS console. From the Log exports section, choose the logs that you want to publish to the CloudWatch logs. After you publish the SQL Server logs to Amazon CloudWatch, you can create metric filters to help you search the logs. Metric filters define the terms and patterns that Amazon CloudWatch searches the log data for. Then, the metric filters turn this log data into numerical CloudWatch metrics that you can set alarms for.For more information, see How can I receive SNS notifications about Amazon RDS for SQL Server error and agent log events that match a CloudWatch filter pattern?Follow" | https://repost.aws/knowledge-center/rds-sql-server-error-notifications |
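The last step above turns exported SQL Server error log lines into CloudWatch alarms. As a minimal sketch of that idea (not part of the original article), the following boto3 snippet creates a metric filter that matches the RAISERROR text and an alarm that notifies an SNS topic; the log group name, filter pattern, and topic ARN are placeholder assumptions you would replace with your own values.

```python
import boto3

# Hypothetical names -- replace with the log group that RDS exports for your
# DB instance and with your own SNS topic ARN.
LOG_GROUP = "/aws/rds/instance/mydbinstance/error"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:rds-job-alerts"

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Turn matching error log lines into a numeric metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="sqlserver-agent-job-failed",
    filterPattern='"Job test2 failed"',  # text raised by the RAISERROR step
    metricTransformations=[{
        "metricName": "SQLServerJobFailed",
        "metricNamespace": "Custom/RDS",
        "metricValue": "1",
    }],
)

# Alarm on the metric and publish to SNS when a failure line appears.
cloudwatch.put_metric_alarm(
    AlarmName="rds-sqlserver-job-failed",
    Namespace="Custom/RDS",
    MetricName="SQLServerJobFailed",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[TOPIC_ARN],
)
```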
How do I resolve the "Custom Named Resource already exists in stack" error in AWS CloudFormation? | "My AWS CloudFormation stack fails to create a resource, and I receive an error message telling me that my resource already exists in the stack. How do I resolve this error?" | "My AWS CloudFormation stack fails to create a resource, and I receive an error message telling me that my resource already exists in the stack. How do I resolve this error?Short descriptionWhen you create a custom-named resource with the same name and set to the same value as another resource, CloudFormation can't differentiate between them. You then receive the error message, "Custom Named Resource already exists in stack." Each custom-named resource has a unique Physical ID. You can't reuse the Physical ID for most resources that are defined in CloudFormation.You can resolve this error by changing the name of the failing resource to a unique name. Or, you can choose to not define the custom name for that resource. If you don't set a custom name, then CloudFormation generates a unique name when the resource is created. This unique name won't conflict with your existing resources.Resolution1. In the CloudFormation template that contains your failing resource, check if other explicitly declared resources have the same name as your failed resource.In the following example, the stack fails because each AWS Identity and Access Management (IAM) ManagedPolicy resource (ManagedPolicyName) has the same custom name (FinalS3WritePolicy).S3DeletePolicy: Type: AWS::IAM::ManagedPolicy Properties: ManagedPolicyName: Fn::Join: - _ - - FinalS3WritePolicy - Ref: EnvType PolicyDocument:................S3WritePolicy: Type: AWS::IAM::ManagedPolicy Properties: ManagedPolicyName: Fn::Join: - _ - - FinalS3WritePolicy - Ref: EnvType PolicyDocument:................2. Update the name of any resource that has a duplicate name. For example, change the first instance of FinalS3WritePolicy in the preceding example to FinalS3DeletePolicy. Or, remove the custom name.In the following examples, Stack A succeeds because each IAM ManagedPolicy resource has a unique custom name (FinalS3DeletePolicy and FinalS3WritePolicy). Stack B succeeds because no custom name values are set for either ManagedPolicyName properties. When the resource is created, CloudFormation automatically generates a unique name for each IAM ManagedPolicy resource in Stack B.Stack A:S3DeletePolicy: Type: AWS::IAM::ManagedPolicy Properties: ManagedPolicyName: Fn::Join: - _ - - FinalS3DeletePolicy - Ref: EnvType PolicyDocument:................S3WritePolicy: Type: AWS::IAM::ManagedPolicy Properties: ManagedPolicyName: Fn::Join: - _ - - FinalS3WritePolicy - Ref: EnvType PolicyDocument:................Stack B:S3DeletePolicy: Type: AWS::IAM::ManagedPolicy Properties: PolicyDocument:................S3WritePolicy: Type: AWS::IAM::ManagedPolicy Properties: PolicyDocument:................Note: You can use the resolution in this article for related errors involving resources that exist in a different stack or resources created outside of CloudFormation.Follow" | https://repost.aws/knowledge-center/cloudformation-stack-resource-failure |
How do I customize a resource property value when there is a gap between CDK higher level constructs and a CloudFormation resource? | I want to modify a resource property value when there's a gap between AWS Cloud Development Kit (AWS CDK) L3/L2 constructs and an AWS CloudFormation. | "I want to modify a resource property value when there's a gap between AWS Cloud Development Kit (AWS CDK) L3/L2 constructs and an AWS CloudFormation.Short descriptionIn some cases, higher level constructs (L3 and L2) will have a resource property that can't be modified. To work around this issue, use AWS CDK escape hatches to move to a lower level of abstraction and modify the resource property value.Example of an Amazon Virtual Private Cloud (Amazon VPC) using AWS CDK Python:vpc = ec2.Vpc(self, "MyCDKVPC", max_azs=2, cidr='60.0.0.0/16', subnet_configuration=[ ec2.SubnetConfiguration( name="public", subnet_type=ec2.SubnetType.PUBLIC, cidr_mask=24, ), ec2.SubnetConfiguration( name="private", subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT, cidr_mask=24, ) ] )The preceding example Amazon VPC is defined using the L2 construct aws_cdk.aws_ec2.Vpc, which has two Availability Zones (AZ) with the CidrBlock set to 60.0.0.0/16. The example Amazon VPC also contains two PublicSubnets and two PrivateSubnets spread across the two AZs.When generating a CloudFormation to check the AWS::EC2::Subnet resource, the CidrBlock will start from the first IP range 60.0.0.0/24 and can't be modified.PublicSubnet1: Type: AWS::EC2::Subnet Properties: ... CidrBlock: 60.0.0.0/24 PublicSubnet2: Type: AWS::EC2::Subnet Properties: ... CidrBlock: 60.0.1.0/24 PrivateSubnet1: Type: AWS::EC2::Subnet Properties: ... CidrBlock: 60.0.2.0/24 PrivateSubnet2: Type: AWS::EC2::Subnet Properties: ... CidrBlock: 60.0.3.0/24To resolve this issue, do these steps:Retrieve generated subnets inside the Amazon VPC (which are the L2 PublicSubnet construct and the L2 PrivateSubnet construct).Use the AWS CDK escape hatch, node.default_child, on the L2 constructs and cast it as the L1 CfnSubnet resource.Modify the cidr_block directly or by using raw overrides.Verify the cidr_block update on the CloudFormation template.ResolutionTo use an AWS CDK escape hatch to modify the CidrBlock value of a lower abstraction layer for PublicSubnets or PrivateSubnets, do these steps:Important: The following steps apply to PublicSubnets. To apply to PrivateSubnets, replace all instances of PublicSubnet with PrivateSubnet.1. Retrieve a list of PublicSubnets in the Amazon VPC by using the vpc.public_subnets attribute:public_subnets = vpc.public_subnetsNote: Each element inside the generated list is a L2 PublicSubnet construct. See the following example printout:########## confirm public_subnets is a L2 construct ##########print(public_subnets) # printout: [<aws_cdk.aws_ec2.PublicSubnet object at 0x7f3f48acb490>, <aws_cdk.aws_ec2.PublicSubnet object at 0x7f3f48acb0502. Use the node.default_child attribute on the desired L2 construct (for this example aws_cdk.aws_ec2.PublicSubnet). Then, cast it as the L1 CfnSubnet resource (for this example aws_cdk.aws_ec2.CfnSubnet):########## confirm cfn_public_subnet is a L1 construct ##########for public_subnet in public_subnets: cfn_public_subnet = public_subnet.node.default_child print(cfn_public_subnet) # printout: <aws_cdk.aws_ec2.CfnSubnet object at 0x7f3f48acb710> # printout: <aws_cdk.aws_ec2.CfnSubnet object at 0x7f3f48acb950>3. 
After accessing the L1 CfnSubnet resource, modify the CidrBlock on the L1 CfnSubnet construct by using one of the following methods:Modify the cidr_block directlyModify the cidr_block by using raw overridesExample of modifying the cidr_block directly or by using raw overrides:public_subnets = vpc.public_subnets public_subnet_index = 0 for public_subnet in public_subnets: cfn_public_subnet = public_subnet.node.default_child ########### 1) modify the cidr_block property directly ########### cfn_public_subnet.cidr_block = "60.0." + str(public_subnet_index + example_start_value) + ".0/24" ########### 2) modify the cidr_block by using raw overrides ########### cfn_public_subnet.add_property_override("CidrBlock", "60.0." + str(public_subnet_index + example_start_value) + ".0/24") public_subnet_index += 1Important: Make sure to replace example_start_value with your specified value. For example, if you want to modify your public_subnet to start from 60.0.100.0/24, then set your example_start_value to 100.4. Verify the CidrBlock update inside the AWS::EC2::Subnet resource on the newly generated CloudFormation template by running the cdk synth command:cdk synthExample output:PublicSubnet1: Type: AWS::EC2::Subnet Properties: ... CidrBlock: 60.0.100.0/24 <--- PublicSubnet2: Type: AWS::EC2::Subnet Properties: ... CidrBlock: 60.0.101.0/24 PrivateSubnet1: Type: AWS::EC2::Subnet Properties: ... CidrBlock: <example_custom_value>/24 PrivateSubnet2: Type: AWS::EC2::Subnet Properties: ... CidrBlock: <example_custom_value>/24Related informationPrivateSubnetvpc.private_subnetsFollow" | https://repost.aws/knowledge-center/cdk-customize-property-values |
How do I resolve the "CharacterStringTooLong (Value is too long) encountered with {Value}" error that I receive when creating a TXT record using DKIM syntax? | "I tried to create a DKIM text resource record that a third party provided in my Amazon Route 53 hosted zone. However, I got the following error: "CharacterStringTooLong (Value is too long) encountered with {Value}."" | "I tried to create a DKIM text resource record that a third party provided in my Amazon Route 53 hosted zone. However, I got the following error: "CharacterStringTooLong (Value is too long) encountered with {Value}."Short descriptionDNS TXT records can contain up to 255 characters in a single string. You must split TXT record strings that are over 255 characters into multiple text strings within the same record.Note: If the value is split, then DKIM functionality doesn't break.Resolution1. Open the resource record that you received from your third-party provider.2. To adhere to the 255 character maximum for a single Route 53 TXT record, split the DKIM key value into two parts. To do this, follow these steps:Copy the DKIM key value from the resource record.Paste the DKIM key value in a new line of a text editor.Split the DKIM key value into two parts, and then enclose each part in double quotation marks. For example, the value for "long_string" is split into "long_""string".Note: Don't add a line break between the two parts.3. Open the Route 53 console.4. In the navigation pane, choose Hosted zones.5. Select your hosted zone.6. Choose Create Record Set.7. In the Create Record Set panel, complete the following steps:For Name, enter the domain key identifier.For Type, choose TXT.For Alias, keep the default selection of No.For TTL, enter the number of seconds. The default value of 300 is typically sufficient.For Value, copy the split DKIM key value that you created in step 2 from your text editor. Paste the split value in the Value field.Choose Create.8. Use dig or nslookup to confirm that the TXT record is presented as a single entry.dig:$ dig selector_key_1._domainkey.domain.com txt ...;; ANSWER SECTION: selector_key_1._domainkey.domain.com. 60 IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz1xhXc+vJKhQJUch86o8Ia2+L/AYo4d5eRVrPMMWzz4EiM4eB4QC1hJ83YMCHLv5dDN2lJ3KWSd5tGOxF/FRj1KdN+Jdf+BVwuklBFO8IrDtMz/lk2CJjF8jlgIUmQAjs3lc/8Bee+" "IQeB2tLX9UWvQMpI3aZuh6Ym6hcvLnbEkALWaMQvqwgxZs1qF6t5VKMjWeNNWIScyNTYL4Ud8wDiBcWh492HustfGUxrl5zmRfEl8BzCbrOqpKPLBmk/xrHRw9PHIJyYOaZA2PFqVcp6mzxjyUmn0DH9HXdhIznflBoIOLL1dm77PyDOKdEWRkSLMCA72mZbFr9gxda72ocQIDAQAB"nslookup: > nslookup -q=TXT selector_key_1._domainkey.domain.com...Non-authoritative answer:selector_key_1._domainkey.domain.com. text = "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz1xhXc+vJKhQJUch86o8Ia2+L/AYo4d5eRVrPMMWzz4EiM4eB4QC1hJ83YMCHLv5dDN2lJ3KWSd5tGOxF/FRj1KdN+Jdf+BVwuklBFO8IrDtMz/lk2CJjF8jlgIUmQAjs3lc/8Bee+" "IQeB2tLX9UWvQMpI3aZuh6Ym6hcvLnbEkALWaMQvqwgxZs1qF6t5VKMjWeNNWIScyNTYL4Ud8wDiBcWh492HustfGUxrl5zmRfEl8BzCbrOqpKPLBmk/xrHRw9PHIJyYOaZA2PFqVcp6mzxjyUmn0DH9HXdhIznflBoIOLL1dm77PyDOKdEWRkSLMCA72mZbFr9gxda72ocQIDAQAB"Related informationTXT record typeFollow" | https://repost.aws/knowledge-center/route53-resolve-dkim-text-record-error |
How do I resolve the error "CannotPullContainerError: You have reached your pull rate limit" in Amazon ECS? | "When I try to pull images from Docker Hub, my Amazon Elastic Container Service (Amazon ECS) task fails with the following error: "CannotPullContainerError: inspect image has been retried 5 time(s): httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/manifests/sha256:2bb501e6429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"" | "When I try to pull images from Docker Hub, my Amazon Elastic Container Service (Amazon ECS) task fails with the following error: "CannotPullContainerError: inspect image has been retried 5 time(s): httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/manifests/sha256:2bb501e6429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"Short descriptionThis error occurs when you try to pull an image from the public Docker Hub repository (on the Docker Hub website) after you reach your Docker pull rate limit (from the Docker Hub website). Exceeding your rate limit returns an HTTP status code of 429. Docker Hub uses IP addresses to authenticate the users, and pull rate limits are based on individual IP addresses. For anonymous users, the rate limit is set to 100 pulls per 6 hours per IP address. For authenticated users with a Docker ID, the pull rate is set to 200 pulls per 6-hour period. If your image pull request exceeds these limits, Amazon ECS denies these requests until the 6-hour window elapses. If you're running your Amazon ECS or Amazon Elastic Kubernetes Service (Amazon EKS) workload, then data is pulled through a NAT gateway with a fixed IP address. In this case, the request is throttled when you exceed the pull limit.Use the AWSSupport-TroubleshootECSTaskFailedToStart runbook to troubleshoot the errors for Amazon ECS tasks that fail to start. This automation reviews the following configurations:Network connectivity to the configured container registryMissing AWS Identity and Access Management (IAM) permissions that the execution role requiresVirtual private cloud (VPC) endpoint connectivitySecurity group rule configurationAWS Secrets Manager secrets referencesLogging configurationResolutionImportant:Use the AWSSupport-TroubleshootECSTaskFailedToStart runbook in the same AWS Region where your ECS cluster resources are located.When using the runbook, you must use the most recently failed Task ID. If the failed task is part of Amazon ECS, then use the most recently failed task in the service. The failed task must be visible in ECS:DescribeTasks during the automation execution. By default, stopped ECS tasks are visible for 1 hour after entering the Stopped state. Using the most recently failed task ID prevents the task state cleanup from interrupting the analysis during the automation.Note: If the runbook's output doesn't provide recommendations, then use one of the manual troubleshooting approaches in the following section.To run the AWSSupport-TroubleshootECSTaskFailedToStart runbook:1. Open the AWS Systems Manager console.2. In the navigation pane, under Change Management, choose Automation.3. Choose Execute automation.4. Choose the Owned by Amazon tab.5. 
Under Automation document, search for TroubleshootECSTaskFailedToStart.6. Select the AWSSupport-TroubleshootECSTaskFailedToStart card.Note: Make sure that you select the radio button on the card and not the hyperlinked automation name.7. Choose Next.Note: After execution, analysis results are populated in the Global output section. However, wait for the status of the document to move to Success. Also, watch for any exceptions in the Output section.8. For Execute automation document, choose Simple execution.9. In the Input parameters section, for AutomationAssumeRole, enter the ARN of the role that allows Systems Manager Automation to perform actions.Note: Be sure that either the AutomationAssumeRole or the IAM user or role has the required IAM permissions to run the AWSSupport-TroubleshootECSTaskFailedToStart runbook. If you don't specify an IAM role, then Systems Manager Automation uses the permissions of the IAM user or role that runs the runbook. For more information about creating the assume role for Systems Manager Automation, see Task 1: Create a service role for Automation.10. For ClusterName, enter the cluster name where the task failed to start.11. For TaskId, enter the identification for the task that most recently failed.12. Choose Execute.Based on the output of the automation, use one of the following manual troubleshooting steps.Copy public images into an Amazon ECR private registryCreate an Amazon Elastic Container Registry (Amazon ECR) repository, and then push the image into this new repository. When you pull the images from the Amazon ECR repository, you might avoid exceeding the Docker Hub pull limit.1. Run a command similar to the following to pull the image from Docker Hub:docker pull example-image2. Run a command similar to the following to authenticate your Docker client to access the Amazon ECR registry:aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 1111222233334444.dkr.ecr.eu-west-1.amazonaws.com3. Run a command similar to the following to tag the image to push to your repository:docker tag myrepository:latest 1111222233334444.dkr.ecr.eu-west-1.amazonaws.com/myrepository:latest4. Run a command similar to the following to push the Docker image to an Amazon ECR registry:docker push 1111222233334444.dkr.ecr.eu-west-1.amazonaws.com/myrepository:latest5. Run a command similar to the following to update the Docker file to use the newly pushed Amazon ECR image as the base image:FROM 1111222233334444.dkr.ecr.eu-west-1.amazonaws.com/myrepository:tagIn the preceding commands, replace the following values with your values:example-image with the name of the public image that you want to push1111222233334444 with your account IDmyrepository:latest with the name of your Amazon ECR registryeu-west-1 with the Region of your choiceAuthenticate the Docker Hub pullWhen you authenticate with Docker Hub, you have more rate limits as an authenticated user and are rate limited based on the Docker username. Store your Docker Hub username and password as a secret in AWS Secrets Manager, and then use this secret to authenticate to Docker Hub.Create a Secrets Manager secret for Docker Hub credentialsTo create a secret for your Docker Hub credentials, use the instructions under the section To create a basic secret in Turning on private registry authentication.Update your task execution IAM roleTo grant the Amazon ECS task access to the secret, manually add the required permissions as an inline policy to the task execution role.1. 
Open the IAM console.2. In the navigation pane, choose Roles.3. Search the list of roles for ecsTaskExecutionRole, and then choose the role to view the attached policies.4. On the Permissions tab, choose Add permissions, and then choose Create inline policy.5. In the Create policy page, choose JSON, and then copy and paste the following policy:{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Action": ["secretsmanager:GetSecretValue","kms:Decrypt"],"Resource": ["arn:aws:secretsmanager:eu-west-1:1111222233334444:secret:dockerhub-0knT","arn:aws:kms:eu-west-1:1111222233334444:key/mykey"]}]}In the preceding policy, replace the following values with your values:1111222233334444 with your account IDeu-west-1 with the Region of your choicemykey with your AWS KMS keyNote: Include kms:Decrypt only if your key uses a custom AWS Key Management Service (AWS KMS) key. Add the ARN for your custom key as a resource.6. Choose Review policy.7. For Name, enter the name of the policy (ECSSecrets).8. Choose Create policy.Create a task definition that uses the secret for Docker authenticationFollow the instructions in Creating a task definition using the classic console to create your Amazon ECS task definition. For Task execution role, select the task execution IAM role that you updated in the preceding section.In the Container definitions section, complete the following steps:1. Choose Add container.2. For Container name, enter the name of your container.3. For Image, enter the name of the image, or include the path to your private image (example: repository-url/image.tag).4. Choose Private repository authentication.5. For Secrets Manager ARN or name, enter the ARN of the secret that you created.6. Choose Add.Create an Amazon ECS cluster and run the Amazon ECS taskCreate an Amazon ECS cluster. Then, use the task definition that you created to run the task.Use Amazon ECR public registry for public container imagesIdentify the public images that you're using in the Docker file. Use the appropriate search filters to search for these images on the Amazon ECR Public Gallery. You don't need to authenticate to browse the public repositories and pull images. The Amazon ECR Public contains popular base images, including operating systems, AWS-published images, Kubernetes add-ons, and artifacts. Pull images from the Amazon ECR public registry to avoid reaching the Docker Hub's rate limit.Use these images as the source for the container image in your task definition:ContainerDefinitions: [ { ... Image: 'public.ecr.aws/cloudwatch-agent/cloudwatch-agent:latest' ... } ]You can also choose to use these images as the base image in your Docker file:Docker File FROM public.ecr.aws/amazonlinux/amazonlinux:latestUpgrade to a Docker Pro or Team subscriptionIf you require more pulls, then upgrade your plan to a Docker Pro or Team subscription that offers 50,000 pulls in a 24-hour period. For more information on pricing plans, see Pricing and subscriptions (from the Docker Hub website).Related informationAmazon ECR pricingAmazon ECR Public service quotasFollow" | https://repost.aws/knowledge-center/ecs-pull-container-error-rate-limit |
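As a rough, optional complement to the console steps for private registry authentication, the following boto3 sketch registers a Fargate task definition whose container pulls from Docker Hub using the Secrets Manager secret through repositoryCredentials. The role ARN, secret ARN, image name, and CPU/memory sizes are placeholder assumptions.

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder ARNs and names -- use your own execution role, secret, and image.
ecs.register_task_definition(
    family="example-dockerhub-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "docker.io/example-org/example-image:latest",
        # Tells the container agent to authenticate the pull with the stored
        # Docker Hub username/password instead of pulling anonymously.
        "repositoryCredentials": {
            "credentialsParameter": "arn:aws:secretsmanager:eu-west-1:111122223333:secret:dockerhub-0knT"
        },
        "essential": True,
    }],
)
```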
Why is my EMR cluster not terminating or terminating earlier than expected when I'm using an auto-termination policy? | I have an auto-termination policy configured for my Amazon EMR cluster. The cluster either keeps running as active or terminates earlier than the idle timeout configured in the auto-termination policy. | "I have an auto-termination policy configured for my Amazon EMR cluster. The cluster either keeps running as active or terminates earlier than the idle timeout configured in the auto-termination policy.Short descriptionWhen you create an EMR cluster, you can turn on the auto-termination policy. The auto-termination policy terminates the cluster after a specific amount of idle time.Resolution1. Make sure that the Amazon Elastic Compute Cloud (Amazon EC2) instance profile role, EMR_EC2_DefaultRole, has the following permissions. If the EMR EC2 instance profile role doesn't have these permissions, then the cluster stays active even if it meets the idle timeout requirement.{ "Version": "2012-10-17", "Statement": { "Sid": "AllowAutoTerminationPolicyActions", "Effect": "Allow", "Action": [ "elasticmapreduce:PutAutoTerminationPolicy", "elasticmapreduce:GetAutoTerminationPolicy", "elasticmapreduce:RemoveAutoTerminationPolicy" ], "Resource": "your-resources" } }In Amazon EMR versions 5.34 to 5.36 and 6.4.0 or later, a cluster is idle when the following are true:There are no active YARN applications.HDFS utilization is below 10%.There are no active EMR notebook or EMR Studio connections.There are no on-cluster application user interfaces in use.In Amazon EMR versions 5.30.0 to 5.33.1 and 6.1.0 to 6.3.0, a cluster is idle when the following are true:There are no active YARN applications.HDFS utilization is below 10%.The cluster has no active Spark jobs.2. Make sure that the metrics-collector process is running. The metrics-collector process collects the metrics to determine auto termination. Run the following commands to check the metrics-collector process:ps -ef|grep metrics-collector-or-systemctl status metricscollector.serviceFor more information, see How do I restart a service in Amazon EMR?3. When you turn on auto-termination using an auto-termination policy, Amazon EMR emits the AutoTerminationClusterIdle Amazon CloudWatch metric at a one-minute granularity. This metric evaluates if the cluster meets the idle state requirement. If this metric shows "1", then the cluster is idle. If it shows "0", then the cluster is still active.View the EMR cluster's CloudWatch metrics and verify that the AutoTerminationClusterIdle CloudWatch metric is continuously "1" in the cluster. If it's continuously "1", then the cluster qualifies for auto-termination.Related informationUsing an auto-termination policyMonitor metrics with CloudWatchFollow" | https://repost.aws/knowledge-center/emr-troubleshoot-cluster-termination |
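If you prefer to manage the auto-termination policy programmatically rather than in the console, a minimal boto3 sketch (assuming a placeholder cluster ID and a 2-hour idle timeout) might look like this:

```python
import boto3

emr = boto3.client("emr")

CLUSTER_ID = "j-EXAMPLE12345"  # placeholder cluster ID

# Attach (or update) an auto-termination policy with a 2-hour idle timeout.
emr.put_auto_termination_policy(
    ClusterId=CLUSTER_ID,
    AutoTerminationPolicy={"IdleTimeout": 7200},  # seconds
)

# Read the policy back to confirm what is currently applied.
policy = emr.get_auto_termination_policy(ClusterId=CLUSTER_ID)
print(policy["AutoTerminationPolicy"])
```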
How can I migrate WorkSpace users between AWS Regions or AWS accounts? | I need to migrate my Amazon WorkSpaces fleet to a different AWS Region due to latency issues. I also need to migrate some of my WorkSpaces fleet to a different AWS account for billing purposes. How might I do this? | "I need to migrate my Amazon WorkSpaces fleet to a different AWS Region due to latency issues. I also need to migrate some of my WorkSpaces fleet to a different AWS account for billing purposes. How might I do this?ResolutionWorkSpaces doesn’t offer an authoritative mechanism to migrate a WorkSpace with user configurations and user data between AWS accounts or Regions. When a WorkSpace is created, its managed resources are bound to its directory service and subnet. However, there are methods you can use to make WorkSpaces images accessible across AWS accounts or Regions.Cross-Region redirectionAs part of its business continuity features, WorkSpaces offers a way to redirect users to another existing WorkSpace in a different AWS account or Region.For more information, see Cross-Region redirection for Amazon WorkSpaces.Sharing WorkSpace imagesWorkSpace images can be shared or migrated between AWS accounts and Regions. WorkSpaces images allow users to launch WorkSpaces resources with predefined applications and operating systems.For more information, see Share or unshare a custom WorkSpaces image and Copy a custom WorkSpaces image.Related informationHow do I create a WorkSpaces image?How do I share WorkSpaces images or BYOL images with other AWS accounts?How do I troubleshoot WorkSpaces image creation issues?Follow" | https://repost.aws/knowledge-center/workspace-user-migration |
How do I create and attach an internet gateway to a VPC? | I want to create and attach an internet gateway to a virtual private cloud (VPC) so that resources in the VPC can communicate with the internet. | "I want to create and attach an internet gateway to a virtual private cloud (VPC) so that resources in the VPC can communicate with the internet.ResolutionFor the resources in a VPC to send and receive traffic from the internet, make sure you complete the following steps:You must attach an internet gateway to the VPC. For more information, see Create and attach an internet gateway.The route tables that are associated with your public subnet (including custom route tables) must have a route to the internet gateway.The security groups and network access control lists (network ACLs) that are associated with your VPC must allow traffic to and from the internet.Your subnet's resources must have a public IP address or an attached Elastic IP address.For more information on instructions for each of these steps, see Connect to the internet using an internet gateway.Follow" | https://repost.aws/knowledge-center/create-attach-igw-vpc |
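A minimal boto3 sketch of the create-and-attach flow described above, assuming placeholder VPC and route table IDs, could look like the following; it creates the internet gateway, attaches it to the VPC, and adds the 0.0.0.0/0 route to the public route table.

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"          # placeholder VPC ID
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # placeholder public route table ID

# Create the internet gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=VPC_ID)

# Route all internet-bound traffic from the public subnet through the gateway.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```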
How can I reduce my Route 53 costs? | The charges for Amazon Route 53 usage on my AWS bill are higher than I expected. What can I do to reduce my Route 53 costs? | "The charges for Amazon Route 53 usage on my AWS bill are higher than I expected. What can I do to reduce my Route 53 costs?Short descriptionRoute 53 charges are based on actual usage of the service for:Hosted zonesQueriesHealth checksDomain namesWith Route 53, you pay only for what you use. For more details, see Amazon Route 53 pricing.To reduce higher than expected Route 53 costs:Delete unused hosted zonesCreate alias records where possibleIncrease the TTL for the recordsReview your traffic policy recordsReview your Resolver endpointsReview your health checksResolutionDelete unused hosted zonesImportant: Be sure to delete only the hosted zones that you don't need. Route 53 can't restore records that you delete in your hosted zone, or the hosted zone itself.There's a monthly charge for each hosted zone created in Route 53. When you create a hosted zone for your domain, Route 53 assigns a set of four name servers to the hosted zone. For public DNS resolution, only the hosted zone that has the name servers added at the domain registrar are used to resolve queries. To minimize costs related to the hosted zone, delete any unused hosted zones.Create alias records where possibleThere's a charge for most DNS queries answered by Route 53. The exception to this policy is queries to alias records mapped to resources provided at no cost, including:Elastic Load Balancing instancesAmazon CloudFront distributionsAWS Elastic Beanstalk environmentsAmazon API GatewaysVirtual Private Cloud (VPC) endpointsAmazon Simple Cloud Storage (Amazon S3) website bucketsFor a complete list of AWS resource types that are supported by alias records, see Value/route traffic to.If your resource is supported by alias records, then edit the record to specify the record type as Alias.Increase the TTLThere's a charge for most DNS queries answered by Route 53. The exception to this policy is queries to alias records mapped to resources provided at no cost, including:Elastic Load Balancing instancesCloudFront distributionsAWS Elastic Beanstalk environmentsAPI GatewaysVPC endpointsAmazon S3 website bucketsIf you configure a higher TTL for your records, then the intermediate resolvers cache the records for longer time. As a result, there are fewer queries received by the name servers. This configuration reduces the charges corresponding to the DNS queries answered. However, higher TTL slows the propagation of record changes because the previous values are cached for longer periods. Lower TTL results in faster propagation. However, lower TTL means that more queries arrive at the name servers because the cached values expire sooner.Review your traffic policy recordsYou create a policy record when you associate a Route 53 traffic flow policy with a specific DNS name (such as www.example.com). The traffic policy manages traffic for that specific DNS name. Traffic polices are generally a best practice for the combination of the routing policies and for the geoproximity routing policy. There's no charge for traffic policies that aren't associated with a DNS name through a policy record.To associate multiple domains with the same traffic policy, create an alias record in the same hosted zone as the traffic policy record. 
For example, you can create a traffic policy record for example.com and an alias record for www.example.com that references the traffic policy record.To further reduce costs, review your traffic policy records. Determine if the traffic policy records can be replaced with simple records or other routing policies.Review your Resolver endpointsA Route 53 Resolver endpoint requires two or more IP addresses. Each IP address corresponds with one elastic network interface. Elastic network interfaces are charged at a rate of $0.125 per hour, per interface.A single outbound endpoint can be shared among multiple VPCs that were created by multiple accounts within the same Region. If you configured multiple outbound endpoints with different VPCs in the same Region, then you incur additional charges. To reduce costs, consolidate your endpoints using the shared mechanism rather than using individual endpoints.Delete unnecessary health checksWhen you associate health checks with an endpoint, health check requests are sent to the endpoint's IP address. These health check requests are sent to validate that the requests are operating as intended. Health check charges are incurred based on their associated endpoints. To avoid health check charges, delete any health checks that aren't used with an RRset record and are no longer required.Be sure to configure Evaluate Target Health (ETH) wherever possible as an alternative to health checks. This strategy helps avoid health check costs. For more information, see:How Amazon Route 53 chooses records when health checking is configuredWhy is my alias record pointing to an Application Load Balancer marked as unhealthy when I’m using “Evaluate Target Health”?How health checks work in simple Amazon Route 53 configurationsFollow" | https://repost.aws/knowledge-center/route-53-reduce-costs |
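As one hedged illustration of the "create alias records where possible" advice, the following boto3 sketch upserts an alias A record that points at a CloudFront distribution; the hosted zone ID, record name, and distribution domain are placeholders, not values from the article.

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"                    # placeholder hosted zone
CLOUDFRONT_DOMAIN = "d111111abcdef8.cloudfront.net"  # placeholder distribution

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    # Fixed hosted zone ID used for all CloudFront alias targets.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": CLOUDFRONT_DOMAIN,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```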
How do I resolve HTTP 502 errors from API Gateway REST APIs with Lambda proxy integration? | "I configured Amazon API Gateway proxy integration to work with an AWS Lambda function. When I call my REST API, I receive a configuration error and an HTTP 502 status code. How do I resolve the issue?" | "I configured Amazon API Gateway proxy integration to work with an AWS Lambda function. When I call my REST API, I receive a configuration error and an HTTP 502 status code. How do I resolve the issue?Short descriptionIf your Lambda function's permissions are incorrect or the response to the API request isn't formatted correctly, then API Gateway returns an HTTP 502 status code.Example HTTP 502 error messages as it appears in Amazon CloudWatch LogsWed Aug 03 08:10:00 UTC 2022 : Execution failed due to configuration error: WE Aug 03 09:10:00 UTC 2022 : Method completed with status: 502-or-Wed Aug 03 08:20:33 UTC 2022 : Lambda execution failed with status 200 due to customer function error: [Errno 13] Permission denied: '/var/task/lambda_function.py'. Lambda request id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxWed Aug 03 08:20:33 UTC 2022 : Method completed with status: 502For API Gateway to handle a Lambda function's response, the function must return output according to the following JSON format:{ "isBase64Encoded": true|false, "statusCode": httpStatusCode, "headers": { "headerName": "headerValue", ... }, "body": "..."}For more information, see Output format of a Lambda function for proxy integration.Resolution1. Review your REST API's CloudWatch metrics with the API dashboard in API Gateway.-or-Review your REST API's log events in the Amazon CloudWatch console.2. In the logs, review the format of your Lambda function's response to your API. If the response isn't in the required JSON format, then reformat it.3. Verify that the Lambda function's resource policy allows access to invoke the function with API Gateway.If the Lambda function execution fails due to a package permission issue, then verify the permissions. For instructions, see How do I troubleshoot "permission denied" or "unable to import module" errors when uploading a Lambda deployment package?5. Verify that the Lambda function handler name and configuration are valid.6. If the Lambda execution fails during runtime, check the Lambda function logs and update the code.7. After making your changes, you can test your REST API method in the API Gateway console.Example Node.js Lambda function with the response correctly formattedNote: Node.js Lambda functions support async handlers and non-async handlers. The following example function uses an async handler.exports.handler = async (event) => { const responseBody = { "key3": "value3", "key2": "value2", "key1": "value1" }; const response = { "statusCode": 200, "headers": { "my_header": "my_value" }, "body": JSON.stringify(responseBody), "isBase64Encoded": false }; return response;};In this example response, there are four fields:statusCode is an integer interpreted by API Gateway that's returned to the caller of the API method.headers are collected and then sent back with the API Gateway response.body must be converted to a string if you're returning data as JSON.Note: You can use JSON.stringify to handle this in Node.js functions. Other runtimes require different solutions, but the concept is the same.isBase64Encoded is a required field if you're working with binary data. 
If you don't use this field, it's a best practice to set the value to FALSE.Related informationSetting up CloudWatch logging for a REST API in API GatewayMonitoring REST APIs with Amazon CloudWatch metricsFollow" | https://repost.aws/knowledge-center/malformed-502-api-gateway |
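The article's example handler is written in Node.js. For comparison, a rough Python equivalent that returns the same proxy-integration response shape (field names taken from the format above, payload values illustrative) might look like this:

```python
import json

def lambda_handler(event, context):
    response_body = {
        "key1": "value1",
        "key2": "value2",
        "key3": "value3",
    }

    # API Gateway Lambda proxy integration expects exactly these top-level
    # fields, with "body" serialized as a string.
    return {
        "isBase64Encoded": False,
        "statusCode": 200,
        "headers": {"my_header": "my_value"},
        "body": json.dumps(response_body),
    }
```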
How do I attach or replace an instance profile on an Amazon EC2 instance? | How do I attach or replace an instance profile on an Amazon Elastic Compute Cloud (Amazon EC2) instance? | "How do I attach or replace an instance profile on an Amazon Elastic Compute Cloud (Amazon EC2) instance?ResolutionFollow these instructions to attach or replace an instance profile on an EC2 instance.Note:If you created the AWS Identity and Access Management (IAM) role using the AWS Management Console and choose EC2 as the AWS service, then the instance profile and role names are the same.If you created the IAM role using the AWS Command Line Interface (AWS CLI), then you must also create the instance profile using the AWS CLI. The IAM role name and instance profile name can be different.If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.You must have permission to launch EC2 instances and permission to pass IAM roles. For more information, see Permissions required for using roles with Amazon EC2.AWS Management ConsoleOpen the Amazon EC2 console, and then choose Instances.Choose the instance that you want to attach an IAM role to.Check the IAM role under the Details pane to confirm if an IAM role is attached to the Amazon EC2 instance. If an IAM role is attached, then be sure that changing the role attached to this Amazon EC2 instance doesn't affect your applications or access to AWS services. Note: The EC2 instance permissions change based on the IAM role attached, and applications running on the instance can be affected.Choose Actions, Security, and then choose Modify IAM role.Note: Amazon EC2 uses an instance profile as a container for an IAM role. For more information, see Instance profiles.In the Choose IAM role dropdown list, choose the instance profile that you want to attach.Choose Save.For more information, see Creating an IAM role (Console).AWS Command Line Interface (AWS CLI)Add the role to an instance profile before attaching the instance profile to the EC2 instance.1. If you haven't already created an instance profile, then run the following AWS CLI command:aws iam create-instance-profile --instance-profile-name EXAMPLEPROFILENAME2. Run the following AWS CLI command to add the role to the instance profile:$ aws iam add-role-to-instance-profile --instance-profile-name EXAMPLEPROFILENAME --role-name EXAMPLEROLENAME3. Run the following AWS CLI command to attach the instance profile to the EC2 instance:$ aws ec2 associate-iam-instance-profile --iam-instance-profile Name=EXAMPLEPROFILENAME --instance-id i-012345678910abcdeNote: If you have an instance profile associated with the EC2 instance, then the associate-iam-instance-profile command fails. To resolve this issue, run the describe-iam-instance-profile-associations command to get the associated instance ID. Then, do one of the following:Run the replace-iam-instance-profile-association command to replace the instance profile.-or-Run the disassociate-iam-instance-profile command to detach the instance profile, and then rerun the associate-iam-instance-profile command.4. 
Run the following AWS CLI command to verify that the IAM role is attached to the instance:$ aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-012345678910abcdeRelated informationUsing an IAM role to grant permissions to applications running on Amazon EC2 instancesUsing instance profilesTroubleshooting IAM and Amazon EC2Follow" | https://repost.aws/knowledge-center/attach-replace-ec2-instance-profile |
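A boto3 equivalent of the attach-and-verify CLI commands above, using the same placeholder instance ID and instance profile name, could be sketched as follows:

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-012345678910abcde"  # placeholder instance ID
PROFILE_NAME = "EXAMPLEPROFILENAME"  # placeholder instance profile name

# Attach the instance profile to the running instance.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": PROFILE_NAME},
    InstanceId=INSTANCE_ID,
)

# Verify the association (state should move to "associated").
associations = ec2.describe_iam_instance_profile_associations(
    Filters=[{"Name": "instance-id", "Values": [INSTANCE_ID]}]
)
print(associations["IamInstanceProfileAssociations"])
```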
Why can't I publish or subscribe to an Amazon SNS topic? | I can't publish or subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. How do I troubleshoot the issue? | "I can't publish or subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. How do I troubleshoot the issue?Short descriptionAn AWS Identity and Access Management (IAM) resource or identity can't publish or subscribe to an Amazon SNS topic without the required permissions.To grant the IAM permissions required to publish or subscribe to an Amazon SNS topic, do one of the following based on your use case.Note: Amazon SNS uses IAM identity-based and Amazon SNS resource-based access policies together to grant access to SNS topics. You can use an IAM policy to restrict user or role access to Amazon SNS actions and topics. An IAM policy can restrict access only to users within your AWS account, not to other AWS accounts. For more information, see IAM and Amazon SNS policies together.ResolutionTo grant another AWS service permissions to publish to an Amazon SNS topicYour Amazon SNS topic's resource-based policy must allow the other AWS service to publish messages to the topic. Review your topic's access policy to confirm that it has the required permissions, and add them if needed.To add the required permissions, edit your Amazon SNS topic's access policy so that it includes the following permissions statement.Important: Replace <service> with the AWS service.{ "Sid": "Allow-AWS-Service-to-publish-to-the-topic", "Effect": "Allow", "Principal": { "Service": "<service>.amazonaws.com" }, "Action": "sns:Publish", "Resource": "arn:aws:sns:your_region:123456789012:YourTopicName"}Important: These permissions allow anyone that has access to your SNS topic's Amazon Resource Name (ARN) to publish messages to the topic through the service endpoint. You can add global condition keys to restrict the publishing permissions to specific resources. The following example uses the arnLike condition operator and the aws:SourceArn global condition key. For more information, see Example cases for Amazon SNS access control.Example IAM policy that restricts Amazon SNS publishing permissions to specific resourcesImportant: Replace <region> with the resource's AWS Region. Replace <account-id> with your account ID. Replace <resource-name> with the resource's name. Replace <service> with the AWS service.{ "Sid": "Allow-AWS-Service-to-publish-to-the-topic", "Effect": "Allow", "Principal": { "Service": "<service>.amazonaws.com" }, "Action": "sns:Publish", "Resource": "arn:aws:sns:your_region:123456789012:YourTopicName", "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:<service>:<region>:<account-id>:<resource-type>:<resource-name>" } }}Note: Amazon S3 doesn't support FIFO SNS topics. If there's an S3 ARN on the topic policy, then make sure that it isn't a path to a bucket folder. For example: arn:aws:s3:*:*:mys3-bucket/*To allow an IAM user or role to subscribe and publish to an Amazon SNS topicBy default, only the topic owner can publish or subscribe to a topic. To allow other IAM entities to subscribe and publish to your topic, your topic's identity-based policy must grant the required permissions.Important: Make sure that neither the IAM entity's policy nor the SNS topic's access policy explicitly denies access to the SNS resource. For more information, see The difference between explicit and implicit denies.If the IAM entity and the SNS topic are in different AWS accountsDo both of the following:1. 
Attach an IAM policy statement to the IAM entity that allows the entity to run the "sns:Subscribe" and "sns:Publish" actions. For instructions, see Adding and removing IAM identity permissions.The following is an example IAM identity-based policy that allows an IAM entity to subscribe and publish to an SNS topic:{ "Statement": [ { "Effect": "Allow", "Action": [ "sns:Publish", "sns:Subscribe" ], "Resource": "arn:aws:sns:your_region:123456789012:YourTopicName" } ]}2. Attach an SNS policy statement to your topic's access policy that allows the IAM entity to run the "sns:Subscribe" and "sns:Publish" actions. For instructions, see How do I edit my Amazon SNS topic's access policy?The following is an example Amazon SNS topic access policy that allows an IAM entity to subscribe and publish to an SNS topic:{ "Statement": [ { "Sid": "Allow-SNS-Permission", "Effect": "Allow", "Principal": { "AWS": "111122223333" }, "Action": [ "sns:Publish", "sns:Subscribe" ], "Resource": "arn:aws:sns:your_region:123456789012:YourTopicName" } ]}Note: The Principal can be an IAM identity-based user or role, or AWS account number. For more information, see AWS JSON policy elements: Principal.If the IAM entity and the SNS topic are in the same accountDo either of the following, but not both:Attach an IAM policy statement to the IAM entity that allows the entity to run the "sns:Subscribe" and "sns:Publish" actions.-or-Attach an SNS policy statement to your topic's access policy that allows the IAM entity to run the "sns:Subscribe" and "sns:Publish" actions.For example policy statements, see the If the IAM entity and the SNS topic are in different AWS accounts section of this article.(For topics with server-side encryption (SSE) activated) Confirm that your topic has the required AWS Key Management (AWS KMS) permissionsIf your topic has SSE activated, then your Amazon SNS topic must use an AWS KMS key that is customer managed. This KMS key must include a custom key policy that gives other AWS services sufficient key usage permissions.The following permissions are the minimum requirements:"kms:Decrypt""kms:GenerateDataKey*"To set up the required AWS KMS permissions, do the following:1. Create a new KMS key that is customer managed and includes the required permissions for the other AWS service.2. Configure SSE for your Amazon SNS topic using the custom KMS key you just created.3. Configure AWS KMS permissions that allow the other AWS service to publish messages to your encrypted topic.Example IAM policy statement that allows another AWS service to publish messages to an encrypted SNS topicImportant: Replace <service> with the AWS service.{ "Sid": "Allow-a-service-to-use-this-key", "Effect": "Allow", "Principal": { "Service": "<service>.amazonaws.com" }, "Action": [ "kms:Decrypt", "kms:GenerateDataKey*" ], "Resource": "*"}Related informationIAM policy for a destination SNS topicGetting started with AWS Cost Anomaly DetectionSubscribing an Amazon SQS queue to an Amazon SNS topicUsing resource-based policies for AWS LambdaFollow" | https://repost.aws/knowledge-center/sns-publish-subscribe-troubleshooting |
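Once the identity-based and topic access policies above are in place, a short boto3 sketch can confirm that subscribing and publishing work; the topic ARN and email endpoint below are placeholders.

```python
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:YourTopicName"  # placeholder

# Subscribe an endpoint (an email address here) to the topic.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="email",
    Endpoint="user@example.com",
)

# Publish a test message; this succeeds only if the caller's IAM policy and
# the topic's access policy both allow sns:Publish on this topic.
sns.publish(
    TopicArn=TOPIC_ARN,
    Subject="Test",
    Message="Hello from an allowed principal",
)
```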
How can I enable audit logging for an Amazon RDS for MySQL or MariaDB instance and publish the logs to CloudWatch? | "I want to audit database (DB) activity to meet compliance requirements for my Amazon Relational Database Service (Amazon RDS) DB instance that's running MySQL or MariaDB. Then, I want to publish the DB logs to Amazon CloudWatch. How can I do this?" | "I want to audit database (DB) activity to meet compliance requirements for my Amazon Relational Database Service (Amazon RDS) DB instance that's running MySQL or MariaDB. Then, I want to publish the DB logs to Amazon CloudWatch. How can I do this?Short descriptionTo use the MariaDB Audit Plugin to capture events such as connections, disconnections, queries, or tables queried, you must do the following:Add and configure the MariaDB Audit Plugin and associate the DB instance with a custom option group.Publish the logs to CloudWatch.If you use Amazon Aurora MySQL-Compatible Edition, see How can I enable audit logging for my Aurora MySQL-Compatible DB cluster and publish the logs to CloudWatch?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, confirm that you're running a recent version of the AWS CLI.Amazon RDS supports Audit Plugin option settings on the following versions for MySQL and MariaDB:All MySQL 5.7 versionsMySQL 5.7.16 and higher 5.7 versionsMySQL 8.0.25 and higher 8.0 versionsMariaDB 10.2 and higherFor more information about supported versions, see MariaDB Audit Plugin support and Options for MariaDB database engines.Add and configure the MariaDB Audit Plugin and associate the DB instance with a custom option group1. Create a custom option group or modify an existing custom option group.2. Add the MariaDB Audit Plugin option to the option group and configure the option settings.3. Apply the option group to the DB instance.To apply the option to a new DB instance, configure the instance to use the newly created option group when you launch the DB instance. To apply the option to an existing DB instance, modify the DB instance and attach the new option group. For more information, see Modifying an Amazon RDS DB instance.After you configure the DB instance with the MariaDB Audit Plugin, you don't need to reboot the DB instance. When the option group is active, auditing begins immediately.Note: Amazon RDS doesn't support turning off logging in the MariaDB Audit Plugin. To turn off audit logging, remove the plugin from the associated option group. This restarts the instance automatically. To limit the length of the query string in a record, use the SERVER_AUDIT_QUERY_LOG_LIMIT option.Publish audit logs to CloudWatch1. Open the Amazon RDS console.2. Choose Databases from the navigation pane.3. Select the DB instance that you want to use to export log data to CloudWatch.4. Choose Modify.5. From the Log exports section, select Audit log.6. Choose Continue.7. Review the Summary of modifications, and then choose Modify instance.You can also use the following AWS CLI command syntax to turn on CloudWatch log exports:aws rds modify-db-instance --db-instance-identifier <mydbinstance> --cloudwatch-logs-export-configuration '{"EnableLogTypes":["audit"]}'After turning on audit logging and modifying your instance to export logs, events that are recorded in audit logs are sent to CloudWatch. Then, you can monitor the log events in CloudWatch.Follow" | https://repost.aws/knowledge-center/advanced-audit-rds-mysql-cloudwatch |
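A boto3 equivalent of the modify-db-instance CLI command above might look like the following sketch; the instance identifier is a placeholder, and ApplyImmediately is an assumption you may prefer to omit so the change waits for the next maintenance window.

```python
import boto3

rds = boto3.client("rds")

# Placeholder instance identifier -- use your own DB instance name.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    ApplyImmediately=True,
)
```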
Can I restrict the access of IAM Identity to specific Amazon EC2 resources? | I want to restrict access of an AWS Identity and Access Management (IAM) user/group/role to a specific Amazon Elastic Compute Cloud (Amazon EC2) resource on the same account. How can I do this? | "I want to restrict access of an AWS Identity and Access Management (IAM) user/group/role to a specific Amazon Elastic Compute Cloud (Amazon EC2) resource on the same account. How can I do this?ResolutionAmazon EC2 has partial support for resource-level permissions or conditions. This means that for certain Amazon EC2 actions, you can control when users are allowed to use those actions based on conditions that have to be fulfilled, or specific resources that users are allowed to use.Isolating IAM users or groups of user's access to Amazon EC2 resources by any criteria other than AWS Region doesn't fit most use cases. If you must isolate your resources by Region or any conditions on the same account, be sure to check the list of Amazon EC2 actions that support resource-level permissions and conditions to verify that your use case is supported.Below is an example of a policy that can be used to restrict access of an IAM identity (user/group/role) to only Start/Stop/Reboot EC2 instances in the N. Virginia (us-east-1) Region. The instance must have a tag key of "Owner" with a tag value of "Bob." "ec2:Describe*" is added to the policy to grant permission to describe the EC2 instance and all associated resources in the AWS management EC2 console.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*" }, { "Effect": "Allow", "Action": [ "ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances" ], "Resource": [ "arn:aws:ec2:us-east-1:111122223333:instance/*" ], "Condition": { "StringEquals": { "ec2:ResourceTag/Owner": "Bob" } } } ]}Note: Replace "Owner," "Bob," and the resource ARN with parameters from your environment.After creating the policy, you can attach it to either an IAM user, group, or roleFor tagging use cases and best practices, see Best practices.Related informationIAM policies for Amazon EC2Identity and access management for Amazon EC2Amazon EC2 API actionsAmazon Resource Names (ARNs)Follow" | https://repost.aws/knowledge-center/restrict-ec2-iam |
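If you want to create and attach the example policy programmatically rather than in the console, one possible boto3 sketch (policy name, user name, and account details are placeholders) is shown below.

```python
import json
import boto3

iam = boto3.client("iam")

# Abbreviated copy of the policy shown above; adjust the tag key, value,
# Region, and account ID for your environment.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"},
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
            "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/Owner": "Bob"}},
        },
    ],
}

# Create a managed policy and attach it to a user (placeholder names).
policy_arn = iam.create_policy(
    PolicyName="RestrictEC2ByOwnerTag",
    PolicyDocument=json.dumps(policy_document),
)["Policy"]["Arn"]

iam.attach_user_policy(UserName="Bob", PolicyArn=policy_arn)
```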
How can I use CloudWatch Logs Insights queries with my VPC flow log? | I want to use Amazon CloudWatch Logs Insights queries to process my Amazon Virtual Private Cloud (Amazon VPC) flow logs that are in a log group. How can I do this? | "I want to use Amazon CloudWatch Logs Insights queries to process my Amazon Virtual Private Cloud (Amazon VPC) flow logs that are in a log group. How can I do this?Short descriptionAfter you turn on VPC flow logs targeting CloudWatch Logs, you see one log stream for each elastic network interface. CloudWatch Logs Insights is a query tool that can perform complex queries on log events stored in log groups. If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and validate deployed fixes.For information on supported log types, see Supported logs and discovered fields.ResolutionRun a queryTo run a query, do the following:1. Open the Cloudwatch console.2. Select Logs, Logs Insights.3. On the Logs Insights dashboard, select the log group that you want to analyze and visualize data for.4. You can create a query, or you can run one of the provided sample queries for VPC flow logs. If you're creating a custom query, start by reviewing the tutorials provided in the Amazon CloudWatch documentation. For information on query syntax, see CloudWatch Logs Insights query syntax.5. Select History to view your previously executed queries. You can run queries again from History.6. To export your results, select Export results and then choose a format.Example queriesScenario 1You have a webserver, application server, and DB server. The application isn't working as expected. For example, you're receiving a timeout or HTTP 503 error and you're trying to determine the cause of the error.Example variables:Action is set to "REJECT" so that only rejected connections are returned.The query includes only internal networks.The list of server IPs shows both inbound and outbound connections (srcAddr and dstAddr).The Limit is set to 5 so that only the first five entries are shown.Web server IP: 10.0.0.4App server IP: 10.0.0.5DB server IP: 10.0.0.6filter(action="REJECT" anddstAddr like /^(10\.|192\.168\.)/andsrcAddrlike /^(10\.|192\.168\.)/ and(srcAddr = "10.0.0.4" ordstAddr = "10.0.0.4" orsrcAddr = "10.0.0.5" ordstAddr = "10.0.0.5" orsrcAddr = "10.0.0.6" ordstAddr = "10.0.0.6" or))|stats count(*) as records by srcAddr,dstAddr,dstPort,protocol |sort records desc |limit 5Scenario 2You're experiencing intermittent timeouts on a given elastic network interface. The following query checks for any rejects on the elastic network interface over a period of time.fields @timestamp, interfaceId, srcAddr, dstAddr, action| filter (interfaceId = 'eni-05012345abcd' and action = 'REJECT')| sort @timestamp desc| limit 5Scenario 3The following query example analyzes VPC flow logs to produce a report on a specific elastic network interface. 
The query checks the amount of traffic that's being sent to different ports.fields @timestamp, @message | filter interfaceId="eni-05012345abcd" | filter dstPort="80" or dstPort="443" or dstPort="22" or dstPort="25" | stats count(*) as records by dstPort, srcAddr, dstAddr as Destination | sort records desc | limit 10Scenario 4The following query filters VPC flow logs to list IP addresses that are trying to connect to a specific IP address or CIDR range in your VPC.For a specific IP:fields @timestamp, srcAddr, dstAddr | filter srcAddr like "172.31." | sort @timestamp desc | limit 5For a specific CIDR:fields @timestamp, srcAddr, dstAddr | filter isIpv4InSubnet(srcAddr, "172.31.0.0/16") | sort @timestamp desc | limit 5Note: For additional example queries, see Sample queries.Related informationAnalyzing log data with CloudWatch Logs InsightsFollow" | https://repost.aws/knowledge-center/vpc-flow-logs-and-cloudwatch-logs-insights |
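As a hedged sketch of how one of the queries above could be run outside the console, the following AWS CLI commands start a Logs Insights query and then fetch its results. The log group name /vpc/flowlogs/example is an assumption, and the time range uses GNU date to cover the last hour.
# Start the Scenario 2 query against the flow log group (log group name is an assumption).
$ aws logs start-query --log-group-name "/vpc/flowlogs/example" --start-time $(date -d '-1 hour' +%s) --end-time $(date +%s) --query-string "fields @timestamp, interfaceId, srcAddr, dstAddr, action | filter (interfaceId = 'eni-05012345abcd' and action = 'REJECT') | sort @timestamp desc | limit 5"
# start-query returns a queryId; poll for the output with get-query-results.
$ aws logs get-query-results --query-id <queryId-returned-by-start-query>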
How do I resolve the error that I receive in AWS CodeDeploy when my deployment times out while waiting for a status callback? | My AWS CodeDeploy deployment times out and returns the following error: "The deployment timed out while waiting for a status callback. CodeDeploy expects a status callback within one hour after a deployment hook is invoked." | "My AWS CodeDeploy deployment times out and returns the following error: "The deployment timed out while waiting for a status callback. CodeDeploy expects a status callback within one hour after a deployment hook is invoked."Short descriptionThis issue can occur when you use CodeDeploy to deploy an Amazon Elastic Container Service (Amazon ECS) service with a validation test.If the test doesn't return a Succeeded or Failed response within 60 minutes after a lifecycle event hook is invoked, then CodeDeploy returns the following error:"The deployment timed out while waiting for a status callback. CodeDeploy expects a status callback within one hour after a deployment hook is invoked."Note: The default timeout limit for a lifecycle hook AWS Lambda function's status callback is 60 minutes.To resolve the error, verify that the lifecycle hook Lambda function has the required method and AWS Identity and Access Management (IAM) permissions.ResolutionConfirm the cause of the error by reviewing your CloudWatch LogsFor instructions, see How do I retrieve log data from Amazon CloudWatch Logs?Verify that the lifecycle hook Lambda function has the required IAM permissionsMake sure that the lifecycle hook Lambda function has an execution role that includes the following permission: PutLifecycleEventHookExecutionStatus.Note: The PutLifecycleEventHookExecutionStatus permission isn't included by default in the AWS managed CodeDeployFullAccess IAM policy.See the following example of a PutLifecycleEventHookExecutionStatus permissions statement:{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "codedeploy:PutLifecycleEventHookExecutionStatus", "Resource": "*" } ]}Verify that the lifecycle hook Lambda function includes the required method to return a status response to CodeDeployMake sure that the lifecycle hook Lambda function includes the putLifecycleEventHookExecutionStatus method.For more information, see Step 3: Create a lifecycle hook Lambda function in the CodeDeploy User Guide.See the following example of a putLifecycleEventHookExecutionStatus method for a lifecycle hook Lambda function:codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) { if (err) { // Validation failed. console.log('AfterAllowTestTraffic validation tests failed'); console.log(err, err.stack); callback("CodeDeploy Status update failed"); } else { // Validation succeeded. console.log("AfterAllowTestTraffic validation tests succeeded"); callback(null, "AfterAllowTestTraffic validation tests succeeded"); } });Follow" | https://repost.aws/knowledge-center/codedeploy-deployment-timeout-errors |
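For testing, the status response described above can also be sent with the AWS CLI instead of the SDK call. The following is a minimal sketch; the deployment ID and the hook execution ID are placeholders for values that CodeDeploy supplies to the Lambda function in the event payload.
# Report a Succeeded status for a lifecycle event hook (IDs are placeholders).
$ aws deploy put-lifecycle-event-hook-execution-status --deployment-id d-EXAMPLE1111 --lifecycle-event-hook-execution-id <hook-execution-id-from-event> --status Succeeded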
What do I need to know about the Amazon RDS maintenance window? | I want to know what happens during the Amazon Relational Database Service (Amazon RDS) maintenance window. I also want to know about pending maintenance actions and how to defer them. | "I want to know what happens during the Amazon Relational Database Service (Amazon RDS) maintenance window. I also want to know about pending maintenance actions and how to defer them.ResolutionAmazon RDS performs maintenance on Amazon RDS resources periodically to fix issues related to security and instance reliability. During the maintenance window, Amazon RDS applies updates related to hardware, the underlying operating system, or the database engine minor version. In addition, DB instance modifications that you've chosen not to apply immediately are also applied during the maintenance window. Some of these maintenance operations, such as operating system updates and database patching, cause downtime on your RDS instance. Enabling the Multi-AZ configuration on your RDS instance can help minimize the downtime required during some maintenance operations.Get notifications for maintenance actionsTo configure notifications for upcoming maintenance actions on your RDS instance, do the following:Create an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications from the Personal Health Dashboard.Create an Amazon CloudWatch Events rule to be notified of AWS Health events related to RDS resources in your account.Use Amazon RDS event notifications to be notified of instance events in the maintenance category. You can also subscribe to Amazon RDS event notifications.To create the CloudWatch Events rule to get notifications for Amazon RDS maintenance actions, do the following:Open the Amazon CloudWatch console.In the navigation pane, under Events, choose Rules.Choose Back to CloudWatch Events.Choose Create rule.Under Event Source, do the following:For Service Name, choose Health.For Event Type, choose Specific Health events.Select Specific service(s).For Specific service(s), select RDS.Select Specific event type category(s).For Specific event type category(s), select scheduledChange.Select Any event type code.Select Any resource.Under Targets, do the following:Choose Add target*, and then select SNS topic.For Topic*, select the Amazon SNS topic that you created for notifying Amazon RDS maintenance actions.Choose Configure details.Under Rule definition, do the following:For Name*, enter the name for the rule.For Description, enter the description for the rule.Choose Create rule.Note: To see the Amazon RDS DB instances that are scheduled to receive hardware maintenance during your maintenance window, review the DB instances that are listed in the Open and recent issues tab on your AWS Health Dashboard. For more information, see the maintenance notification email that's sent to your account.List pending maintenance actionsTo view whether a maintenance update is available for your DB instance, do the following:Open the Amazon RDS console.In the navigation pane, choose Databases.Choose the settings icon.Under Preferences, turn on Maintenance, and then choose Continue.You can see the maintenance updates for your DB instance with one of the following column values:required: The maintenance action will be applied to the resource and can't be deferred indefinitely.available: The maintenance action is available, but won't be applied to the resource automatically. 
You can apply it manually.next window: The maintenance action will be applied to the resource during the next maintenance window.in progress: The maintenance action is in the process of being applied to the resource.To view the maintenance actions for the RDS instance, do the following:Open the Amazon RDS console.In the navigation pane, choose Databases.Select the DB instance that you want to view.Choose the Maintenance & backups tab.You can view the list of pending maintenance actions under the Pending maintenance section.You can also run the following AWS Command Line Interface (AWS CLI) command to list pending maintenance actions:$ aws rds describe-pending-maintenance-actions --region example-region-nameNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Change the maintenance windowThe maintenance window should fall during your period of lowest usage, so you might need to modify it from time to time. To change the maintenance window to a preferred time, see Adjusting the preferred DB instance maintenance window.Changing the maintenance window for an RDS instance doesn’t require any downtime. However, if there are one or more pending actions that cause downtime, and the maintenance window is changed to include the current time, then the pending actions are applied immediately, resulting in downtime.To postpone a maintenance action that's scheduled for the next maintenance window, consider changing the maintenance window of your DB instance to the next feasible window.Important: Repeatedly changing the maintenance window to avoid downtime might lead to the maintenance actions being applied at a time of high usage. This might cause an outage.Defer maintenance actionsYou can't defer a maintenance action that has already started. However, you can defer a maintenance action that's scheduled for the next maintenance window. If you set the Maintenance value to next window, then the option to defer is available:Open the Amazon RDS console.In the navigation pane, choose Databases.Choose the DB instance for which you want to defer the maintenance action.Choose Actions, and then choose Defer upgrade.Related informationMaintaining a DB instanceModifying an Amazon RDS DB instanceHow do I minimize downtime during required Amazon RDS maintenance?How long is the Amazon RDS maintenance window?How do I configure notifications for Amazon RDS or Amazon Redshift maintenance windows?Follow" | https://repost.aws/knowledge-center/rds-maintenance-window |
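As a hedged companion to the describe-pending-maintenance-actions command above, the following AWS CLI sketch shows how a listed action could be applied immediately or opted in to the next maintenance window. The resource ARN is a placeholder, and the apply-action value must match an action returned by describe-pending-maintenance-actions.
# Apply a pending action right away (placeholder ARN and action name).
$ aws rds apply-pending-maintenance-action --resource-identifier arn:aws:rds:us-east-1:111122223333:db:example-db --apply-action system-update --opt-in-type immediate
# Or opt in so that the action runs during the next maintenance window.
$ aws rds apply-pending-maintenance-action --resource-identifier arn:aws:rds:us-east-1:111122223333:db:example-db --apply-action system-update --opt-in-type next-maintenance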
How can I allow only certain file types to be uploaded to my Amazon S3 bucket? | I want only certain file types to be stored in my Amazon Simple Storage Service (Amazon S3) bucket. How can I limit uploads so that my bucket accepts only those file types? | "I want only certain file types to be stored in my Amazon Simple Storage Service (Amazon S3) bucket. How can I limit uploads so that my bucket accepts only those file types?ResolutionAdd statements to your bucket policy that do the following:Allow the s3:PutObject action only for objects that have the extension of the file type that you want.Explicitly deny the s3:PutObject action for objects that don't have the extension of the file type that you want.Note: This explicit deny statement applies the file-type requirement to users with full access to your Amazon S3 resources.For example, the following bucket policy allows the s3:PutObject action for exampleuser only for objects with .jpg, .png, or .gif file extensions:Warning: This example bucket policy includes an explicit deny statement. If a user doesn't meet the specified conditions, even the user who created the bucket policy can be denied access to the bucket. Therefore, you must carefully review the bucket policy before saving it. If you've accidentally locked the bucket, then see I accidentally denied everyone access to my Amazon S3 bucket. How do I regain access?{ "Version": "2012-10-17", "Id": "Policy1464968545158", "Statement": [ { "Sid": "Stmt1464968483619", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111111111111:user/exampleuser" }, "Action": "s3:PutObject", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.jpg", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.png", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.gif" ] }, { "Sid": "Stmt1464968483620", "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject", "NotResource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.jpg", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.png", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.gif" ] } ]}Note:For the first Principal value, list the Amazon Resource Names (ARNs) of the users that you want to grant upload permissions to.For the Resource and NotResource values, make sure to replace DOC-EXAMPLE-BUCKET with the name of your bucket.The Sid values must be unique within the policy.When you specify resources in the bucket policy, the bucket policy evaluation is case-sensitive. A bucket policy that denies s3:PutObject actions for NotResource "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*.jpg" will allow you to upload "my_image.jpg". However, if you try to upload "my_image.JPG", Amazon S3 will return an Access Denied error.Follow" | https://repost.aws/knowledge-center/s3-allow-certain-file-types |
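As a hedged sketch of applying and testing the bucket policy above, the following AWS CLI commands assume the policy JSON is saved locally as policy.json; the put-bucket-policy call is run by an administrator, and the upload tests are run with the exampleuser credentials.
# Attach the bucket policy to the bucket (file name is an assumption).
$ aws s3api put-bucket-policy --bucket DOC-EXAMPLE-BUCKET --policy file://policy.json
# An allowed extension should upload successfully for exampleuser.
$ aws s3 cp my_image.jpg s3://DOC-EXAMPLE-BUCKET/my_image.jpg
# Any other extension should be rejected with an Access Denied error.
$ aws s3 cp notes.txt s3://DOC-EXAMPLE-BUCKET/notes.txt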