Keywords: Tomcat - Amazon Web Services - Technical issue - Other
Our instances all originally derived from a Bitnami Apache Tomcat AMI. We've since continued to build our own AMIs from those servers, but the underlying structure is still the original Bitnami layout. Recently we've been having a problem when running upgrades: the instance works fine after the upgrade, but after a reboot it fails the instance status check and becomes unreachable.
I've seen this happen before and was able to spawn a new instance from an earlier snapshot of that instance. I ran the upgrade again and watched exactly what it upgraded, and noticed that the JDK was upgraded. Looking at the JDK directory, I found that the bitnami directory containing the setenv.sh files was missing. I copied it back in and restarted, and all was good: the instance passed the status checks.
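For reference, the check-and-restore I did was roughly along these lines. The paths assume the standard Bitnami Tomcat layout, and the backup location is just a placeholder for wherever you kept a copy from a healthy instance:

```shell
# Check whether the Bitnami-shipped setenv.sh is still present
# (path assumed from the standard Bitnami Tomcat layout; yours may differ).
ls -l /opt/bitnami/apache-tomcat/bin/setenv.sh

# If it's missing, restore it from a copy taken off a healthy instance
# ("/backup" is a placeholder), then restart via the Bitnami control script.
sudo cp /backup/apache-tomcat/bin/setenv.sh /opt/bitnami/apache-tomcat/bin/
sudo /opt/bitnami/ctlscript.sh restart
```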
This time, however, I don't have a recent snapshot of the problematic instance. So I detached the instance's volume and attached it to another instance, which lets me access the filesystem. Once again the bitnami folder was missing, so I restored it, created a new AMI from that volume, and launched a new instance. That instance, however, is also failing the instance status check.
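In case it helps anyone reproduce what I did, the volume-rescue steps were along these lines. The volume and instance IDs are placeholders, and the device names depend on the instance type:

```shell
# Detach the root volume from the broken (stopped) instance and attach it
# to a healthy rescue instance as a secondary device (IDs are placeholders).
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf

# On the rescue instance, mount the volume to inspect and repair it.
# The partition name may differ (e.g. /dev/nvme1n1p1 on Nitro instances).
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
ls /mnt/rescue/opt/bitnami
```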
So my question is: is there a better way to identify why these status checks fail? Are there Bitnami logs somewhere from the attempted startup that would show what went wrong? Is there a tool I can run to analyze the volume?
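The only diagnostics I know of so far are the EC2 console output, which I believe shows the boot log even when the instance is unreachable, and the logs left on the detached volume itself. Is there more than this? (Instance ID below is a placeholder, and the log paths assume the standard Bitnami layout.)

```shell
# Fetch the boot-time console log for the failing instance; this works
# even when SSH is dead (instance ID is a placeholder).
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text

# On the detached volume (mounted at /mnt/rescue), the system and Tomcat
# logs from the failed boot should still be there.
sudo less /mnt/rescue/var/log/syslog
sudo less /mnt/rescue/opt/bitnami/apache-tomcat/logs/catalina.out
```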
Any help is greatly appreciated.