I have an EC2 instance running Ubuntu. I ran sudo ufw enable
and then only allowed the MongoDB port:
sudo ufw allow 27017
When the SSH connection dropped, I couldn't reconnect.
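For context: enabling ufw without first allowing SSH means new connections on port 22 are blocked, which is why the session could not be re-established. Once access is restored, the safe ordering is to allow SSH before enabling the firewall; a minimal sketch (adjust the port if sshd listens elsewhere):

```bash
# Allow SSH before turning the firewall on
sudo ufw allow OpenSSH        # or: sudo ufw allow 22/tcp
sudo ufw allow 27017/tcp      # MongoDB
sudo ufw enable
sudo ufw status verbose
```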
If your instance is a managed instance in AWS Systems Manager, you can use the AWSSupport-ResetAccess automation document to recover a lost key pair. AWSSupport-ResetAccess automatically generates and adds a new SSH (public/private) key pair to the specified EC2 instance using the EC2Rescue for Linux tool.
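If the instance is SSM-managed, you can start that automation from the AWS CLI; a rough sketch (the instance ID is a placeholder, and the parameter names should be double-checked against the document's schema in the Systems Manager console):

```bash
# Run the AWSSupport-ResetAccess automation against the locked-out instance
aws ssm start-automation-execution \
    --document-name "AWSSupport-ResetAccess" \
    --parameters "InstanceId=i-0123456789abcdef0"

# Check progress; the execution ID is returned by the previous command
aws ssm describe-automation-executions
```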
There are a number of reasons why you might get an SSH error, such as Resource temporarily unavailable. One common cause is a passphrase-protected key file whose passphrase hasn't been entered; to resolve that, enter the passphrase or use ssh-agent to load the key automatically.
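For the passphrase case, loading the key into ssh-agent looks roughly like this (the key path and hostname are placeholders):

```bash
# Start an agent for this shell and load the passphrase-protected key once
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/my-ec2-key.pem        # prompts for the passphrase

# Later connections reuse the cached key
ssh ubuntu@ec2-203-0-113-10.compute-1.amazonaws.com
```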
# Update
The easiest way is to update the instance's user data. A user data script runs as root during boot, so it can disable the firewall without an SSH session; the cloud-config part below forces the script to run on every boot rather than only on first launch.
Stop your instance.
Right-click (Windows) or Ctrl+click (Mac) the instance to open the context menu, then go to Instance Settings -> Edit User Data, or select the instance and go to Actions -> Instance Settings -> Edit User Data.
If you're still on the old AWS console, select the instance and go to Actions -> Instance Settings -> View/Change User Data.
Then paste this:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
ufw disable
iptables -L
iptables -F
--//
Start the instance again; the script runs during boot and disables ufw.
Source here
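The same change can be made from the AWS CLI; a sketch with a placeholder instance ID (depending on your CLI version you may need to base64-encode the user data file yourself, so verify the result with describe-instance-attribute afterwards):

```bash
INSTANCE_ID=i-0123456789abcdef0      # placeholder

# User data can only be modified while the instance is stopped
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# userdata.txt contains the multipart content shown above
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
    --attribute userData --value file://userdata.txt

aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```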
# Old Answer
Detach the problem instance's volume and fix it using another instance:
Launch a new instance (the recovery instance).
Stop the original instance (DO NOT TERMINATE it).
Detach the volume (the problem volume) from the original instance.
Attach it to the recovery instance as /dev/sdf.
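If you prefer the CLI for the stop/detach/attach steps, a sketch along these lines should work (the instance and volume IDs are placeholders, and the recovery instance is assumed to already exist):

```bash
ORIGINAL_ID=i-0bbbbbbbbbbbbbbbb      # placeholder: the locked-out instance
RECOVERY_ID=i-0aaaaaaaaaaaaaaaa      # placeholder: the recovery instance
VOLUME_ID=vol-0123456789abcdef0      # placeholder: the problem (root) volume

aws ec2 stop-instances --instance-ids "$ORIGINAL_ID"
aws ec2 wait instance-stopped --instance-ids "$ORIGINAL_ID"

aws ec2 detach-volume --volume-id "$VOLUME_ID"
aws ec2 wait volume-available --volume-ids "$VOLUME_ID"

# Attach the problem volume to the recovery instance as /dev/sdf
aws ec2 attach-volume --volume-id "$VOLUME_ID" --instance-id "$RECOVERY_ID" --device /dev/sdf
```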
Log in to the recovery instance via SSH/PuTTY.
Run sudo lsblk to display the attached volumes and confirm the device name of the problem volume. It usually begins with /dev/xvdf; mine is /dev/xvdf1.
Mount the problem volume and change into its ufw configuration directory:
$ sudo mount /dev/xvdf1 /mnt
$ cd /mnt/etc/ufw
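On newer Nitro-based instance types, EBS volumes show up as NVMe devices instead of /dev/xvdf; in that case the mount looks something like this (the device index is an assumption, so confirm it with lsblk first):

```bash
# Assumed NVMe device name on a Nitro-based recovery instance; verify with lsblk
sudo mount /dev/nvme1n1p1 /mnt
cd /mnt/etc/ufw
```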
Open the ufw configuration file:
$ sudo vim ufw.conf
Press i to edit the file and change ENABLED=yes to ENABLED=no.
Press Esc, then type :wq and press Enter to save the file and exit.
Display the contents of the ufw conf file using the command below and confirm that ENABLED=yes has been changed to ENABLED=no:
$ sudo cat ufw.conf
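If you'd rather not edit the file interactively, a single sed command achieves the same change (the path assumes the volume is mounted at /mnt as above):

```bash
# Flip ENABLED=yes to ENABLED=no in place, then verify
sudo sed -i 's/^ENABLED=yes/ENABLED=no/' /mnt/etc/ufw/ufw.conf
grep ENABLED /mnt/etc/ufw/ufw.conf
```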
Unmount the volume:
$ cd ~
$ sudo umount /mnt
Detach the problem volume from the recovery instance and re-attach it to the original instance as /dev/sda1.
Start the original instance, and you should be able to log back in.
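These last steps can also be done from the CLI; a sketch with placeholder IDs (check the original instance's root device name in the console, /dev/sda1 is assumed here):

```bash
VOLUME_ID=vol-0123456789abcdef0      # placeholder: the problem volume
ORIGINAL_ID=i-0bbbbbbbbbbbbbbbb      # placeholder: the original instance

aws ec2 detach-volume --volume-id "$VOLUME_ID"
aws ec2 wait volume-available --volume-ids "$VOLUME_ID"

# Re-attach as the original root device
aws ec2 attach-volume --volume-id "$VOLUME_ID" --instance-id "$ORIGINAL_ID" --device /dev/sda1

aws ec2 start-instances --instance-ids "$ORIGINAL_ID"
```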
Source: here