Problem logging in to the ELK stack on AWS

Hello,

I just installed an ELK stack at AWS from the marketplace.

AMI ID: bitnami-elk-5.2.2-0-linux-ubuntu-14.04.3-x86_64-hvm-ebs-mp-66adbbee-28cd-4eb1-88c0-823e55d54085-ami-fa8523ec.4 (ami-41d8092e)

I am able to log in via the SSH console, but not via the web browser.
I followed the instructions to retrieve the initial password from the system log.

https://192.168.1.2/bitnami --> https://192.168.1.2/elk

I tried to log in with ‘user’ and the random password, but without success.

I tried to reset the password with the Bitnami configuration tool, but the option --userpass is not valid:
Bitnami Config Tool — Built on 2017-03-07 10:02:25 IB: 17.1.0-201702230302

What else can I check?

bye, Stefan

Problem is solved.

I reinstalled the ELK stack a second time.

And surprise: it works.

The only thing I changed was the AWS instance type: from t2.micro to t2.medium.
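
Note: instead of reinstalling, it should also be possible to resize the existing instance from the AWS CLI (the instance ID below is just a placeholder, and the instance has to be stopped before changing its type):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t2.medium\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0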

bye, Stefan

Hi @daaho,

We are glad you solved the issue. Take into account that ELK runs on Java, and Java applications have high memory requirements, so a micro instance is not enough for it.
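
As a rough check of whether memory is the bottleneck (the paths below follow the Bitnami layout used elsewhere in this thread; adjust them if your image differs):

# Heap flags Elasticsearch is started with (the bundled default is about 1 GB)
grep -E "^-Xm[sx]" /opt/bitnami/elasticsearch/config/jvm.options

# Total and free memory on the instance; a t2.micro only has 1 GB of RAM
free -m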

Please don’t hesitate to ask if you have any other questions.

Regards

I am having the same problem, but with the Elasticsearch-head login. I tried to log in with user and the password you can get by running cat /home/bitnami/bitnami_credentials, and I get:

Unauthorized

This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn’t understand how to supply the credentials required.

I also tried using the Bitnami user and the password stored in the installation logfile that is saved for 24 hours. I know this is the user for SSH but I thought I’d give it a try.
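
A quick way to test credentials from the command line, assuming the /elk frontend uses plain HTTP basic auth (which the Unauthorized page suggests) and replacing SERVER_IP and THE_PASSWORD with your own values:

curl -k -I -u user:'THE_PASSWORD' https://SERVER_IP/elk

A 200/302 response means the credentials were accepted; a 401 means they were not.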

I also changed the Linux password for the elasticsearch user and tried to log in as elasticsearch…

Is the first account, “user”, the correct one to use for Elasticsearch-head?

TIA dry

Hello @dryanhawley,

This seems to be related to Elasticsearch-head login.

In order to avoid mixing topics, could you please open a new ticket with all the information? We have a Support Tool that will gather relevant information for us to analyze your configuration and logs. Could you please execute it on the machine where the stack is running by following the steps described in the guide below?

Please note that you need to paste the code ID that is shown at the end.

Regards

DavidG,

we modified the /opt/bitnami/elasticsearch/config/elasticsearch.yml to create a master/slave relationship between two Bitnami ELK stacks, and it broke the stack. Now we need a little
help to create a functioning elasticsearch.yml file. Here are the lines we changed:

cluster:
  name: bnCluster
  initial_master_nodes:
    - ip-172-31-30-159

and

node:
  name: ip-172-31-30-159
  master: true   # (only this line changed)

discovery:
  seed_hosts:
    - ip-172-31-30-159
    - ip-172-31-30-77   # (adding the slave cluster's IP; only this line changed)

and on the intended slave cluster we changed the “initial_master_nodes” to the hostname of the
intended master.

When we restarted all services we got the following message:

root@ip-172-31-81-159:~# /opt/bitnami/ctlscript.sh restart
Restarting services…
Job for bitnami.service failed because the control process exited with error code.
See “systemctl status bitnami.service” and “journalctl -xe” for details.
root@ip-172-31-81-159:~# systemctl status bitnami.service
● bitnami.service - LSB: bitnami init script
Loaded: loaded (/etc/init.d/bitnami; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2021-05-25 14:55:25 UTC; 10s ago
Process: 736 ExecStart=/etc/init.d/bitnami start (code=exited, status=1/FAILURE)

May 25 14:55:25 ip-172-31-81-159 bitnami[736]: elasticsearch 14:55:25.04 ERROR ==> An error occurred when starting elasticsearch
May 25 14:55:25 ip-172-31-81-159 bitnami[736]: 2021-05-25T14:55:25.054Z - error: Unable to perform start operation Export start for elasticsearch failed with exit code 1
May 25 14:55:25 ip-172-31-81-159 bitnami[736]: ## 2021-05-25 14:55:25+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/010_bitnami_agent_extra…
May 25 14:55:25 ip-172-31-81-159 bitnami[736]: ## 2021-05-25 14:55:25+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/020_bitnami_agent…
May 25 14:55:25 ip-172-31-81-159 bitnami[736]: ## 2021-05-25 14:55:25+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/030_update_welcome_file…
May 25 14:55:25 ip-172-31-81-159 bitnami[736]: ## 2021-05-25 14:55:25+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/040_bitnami_credentials_file…
May 25 14:55:25 ip-172-31-81-159 bitnami[736]: ## 2021-05-25 14:55:25+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/050_clean_metadata…
May 25 14:55:25 ip-172-31-81-159 systemd[1]: bitnami.service: Control process exited, code=exited, status=1/FAILURE
May 25 14:55:25 ip-172-31-81-159 systemd[1]: bitnami.service: Failed with result ‘exit-code’.
May 25 14:55:25 ip-172-31-81-159 systemd[1]: Failed to start LSB: bitnami init script.

PS: I will run the support tool on the slave cluster as well. BTW, our goal is to duplicate data nodes in different availability zones. Please advise on the best way to do this. We were thinking, once this works, of creating an AWS AMI of the slave node and deploying it into different availability zones. Is this how you’d recommend doing it?


Previous topic: we still want to know which user to use to log in to Elasticsearch-head.

We thought the correct user should be “user” and the password stored in /home/bitnami

Thanks, David

"Server ports 22, 80 and/or 443 are not publicly accessible. "

The Bitnami Support Tool reported this. These ports should not really be made publicly accessible, correct?
Is it just a warning? They are open only to the IPs that need to use the stack.

Hello @dryanhawley

Could you share the output of the support tool with us so we can check the results?

Regards

David. I sent the output of the support tool for both stacks I’m trying to set up as master and slave. Today I created a fresh instance so I can continue to work… And the only change I made on the box was to add this to the last line of the /opt/bitnami/elasticsearch/config/elasticsearch.yml file:
pack.security.enabled true

and I restarted the services using

/opt/bitnami/ctlscript.sh restart

This produces the same error we see above.

systemctl status bitnami.service confirms the same error. And the stack is down, so I can no longer connect to Kibana.

I made a typo above, there was an “x” in front of “pack.security.enabled true”

So the last line of elasticsearch.yml actually says

xpack.security.enabled true
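
For reference, the standard YAML form of that setting has a colon between the key and the value; without it the file fails to parse, which by itself could explain the crash (this is only a guess based on the line as quoted here):

xpack.security.enabled: true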

This seems like the wrong response to adding xpack security to the stack.

Hi @dryanhawley,

I’m not sure if you finally solved the issues, did you?

If not, please note you need to copy here the code ID generated by the support tool so we can access it.

Regards

DavidG, I don’t remember seeing a code ID; I’ll run it again and look for it. I made some changes to elasticsearch.yml as described here:

https://docs.bitnami.com/general/apps/elk/administration/add-nodes/

Again, our goal is to have multiple redundant clusters that share data (remote data nodes). When we followed the steps at elastic.co, it crashed the Elastic stack. Today we tried the steps above from your site, and in a browser we are getting “Kibana not ready” on both servers. Maybe they are updating each other? But it has been going on for over an hour.
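
One thing that might help narrow down the “Kibana not ready” message is the Kibana log (the path here is the one configured in the kibana.yml shown later in this thread):

tail -f /opt/bitnami/kibana/logs/kibana.log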

Here are the two codes, one for each elastic server that we are trying to get to share data.

d9bcfe81-d34e-3b0a-c825-c65aa7457ad1

cb0cc169-be11-058b-7c6e-ae01d835c289

Hi @dryanhawley!

I will be happy to help you configure replication for Elasticsearch.

First of all, 2 nodes won’t be enough, as the minimum recommended for an Elasticsearch cluster is 3 nodes (with fewer than 3 master-eligible nodes, the cluster cannot keep a majority for master election if one of them fails).

Requirements:

  • Deploy 1 Bitnami ELK instance.
  • Deploy 2 Bitnami Elasticsearch instances.

Once you have the 3 instances running, you will have to do the following:

Configure your ELK instance with the following settings:

  • Configure elasticsearch.yml with your cluster settings:
http:
  port: 9200
path:
  data: /bitnami/elasticsearch/data
transport:
  tcp:
    port: 9300
action:
  destructive_requires_name: true
network:
  host: <current_instance_private_ip>
  publish_host: <current_instance_private_ip>
  bind_host: 
    - <current_instance_private_ip>
    - 127.0.0.1
cluster:
  name: <my_cluster_name>
  initial_master_nodes: 
    - <ip_instance_1>
    - <ip_instance_2>
    - <ip_instance_3>
node:
  name: <current_node_name>
  master: true
  data: true
  ingest: false
discovery:
  seed_hosts: 
    - <ip_instance_1>
    - <ip_instance_2>
    - <ip_instance_3>
  initial_state_timeout: 5m
gateway:
  recover_after_nodes: 1
  expected_nodes: 1
xpack:
  ml:
    enabled: false
  • Configure Kibana (/opt/bitnami/kibana/config/kibana.yml) to reach all 3 nodes:
path:
  data: /bitnami/kibana/data
logging:
  dest: /opt/bitnami/kibana/logs/kibana.log
pid:
  file: /opt/bitnami/kibana/tmp/kibana.pid
server:
  host: 0.0.0.0
  port: 5601
elasticsearch:
  hosts: 
    - http://<ip_instance_1>:9200
    - http://<ip_instance_2>:9200
    - http://<ip_instance_3>:9200
  • Configure Logstash to reach all 3 nodes (/opt/bitnami/logstash/config/logstash-sample.conf):
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://<ip_instance_1>:9200","http://<ip_instance_2>:9200","http://<ip_instance_3>:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
  • Restart the services:
/opt/bitnami/ctlscript.sh restart

Once the first node is up and running, add the other two Elasticsearch nodes by doing the following:

  • Stop the running Elasticsearch service in both Elasticsearch-only instances.
/opt/bitnami/ctlscript.sh stop
  • Remove the data from the node. This is required because the node had already started as a Standalone deployment.
rm -rf /bitnami/elasticsearch/data/*
  • Configure elasticsearch.yml with the following settings:
http:
  port: 9200
path:
  data: /bitnami/elasticsearch/data
transport:
  tcp:
    port: 9300
action:
  destructive_requires_name: true
network:
  host: <current_instance_private_ip>
  publish_host: <current_instance_private_ip>
  bind_host: 
    - <current_instance_private_ip>
    - 127.0.0.1
cluster:
  name: <my_cluster_name>
  initial_master_nodes: 
    - <ip_instance_1>
    - <ip_instance_2>
    - <ip_instance_3>
node:
  name: <current_node_name>
  master: true
  data: true
  ingest: false
discovery:
  seed_hosts: 
    - <ip_instance_1>
    - <ip_instance_2>
    - <ip_instance_3>
  initial_state_timeout: 5m
gateway:
  recover_after_nodes: 1
  expected_nodes: 1
xpack:
  ml:
    enabled: false
  • Restart the Elasticsearch service
/opt/bitnami/ctlscript.sh start

Once all the nodes have been restarted with the new configuration, you can check that the cluster was formed by running the following command:

curl http://127.0.0.1:9200/_cluster/health?pretty

Expected output:

{
  "cluster_name" : "<my_cluster_name>",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
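
Another quick sanity check (a standard Elasticsearch API, not specific to Bitnami) is to list the nodes that have actually joined the cluster; each node should appear on its own line:

curl http://127.0.0.1:9200/_cat/nodes?v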

This would also work with 3 ELK instances, but since you probably don’t need 3 Kibana and 3 Logstash installations, using the Elasticsearch-only instances is more optimal for performance, as Elasticsearch won’t have to share resources.

David, I ran through your steps several times, and the result is always the same error:
“/opt/bitnami/elasticsearch/config# ctlscript.sh start
Starting services…
Job for bitnami.service failed because the control process exited with error code.
See “systemctl status bitnami.service” and “journalctl -xe” for details.”

I will run the Bitnami Support Tool and send you the results.

systemctl status bitnami.service
● bitnami.service - LSB: bitnami init script
Loaded: loaded (/etc/init.d/bitnami; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2021-06-01 14:01:08 UTC; 6min ago
Process: 10824 ExecStart=/etc/init.d/bitnami start (code=exited, status=1/FAILURE)
Tasks: 58 (limit: 4915)
Memory: 1.2G
CGroup: /system.slice/bitnami.service
├─ 973 /opt/bitnami/gonit/bin/gonit
└─7235 /opt/bitnami/java/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitia

Jun 01 14:01:08 ip-NN.NN.NN bitnami[10824]: elasticsearch 14:01:08.39 ERROR ==> elasticsear
Jun 01 14:01:08 ip-NN.NN.NN bitnami[10824]: 2021-06-01T14:01:08.410Z - error: Unable to per
Jun 01 14:01:08 ip-NN.NN.NN bitnami[10824]: ## 2021-06-01 14:01:08+00:00 ## INFO ## Running
Jun 01 14:01:08 ip-NN.NN.NN bitnami[10824]: ## 2021-06-01 14:01:08+00:00 ## INFO ## Running
Jun 01 14:01:08 ip-NN.NN.NN bitnami[10824]: ## 2021-06-01 14:01:08+00:00 ## INFO ## Running
Jun 01 14:01:08 ip-NN.NN.NN bitnami[10824]: ## 2021-06-01 14:01:08+00:00 ## INFO ## Running
Jun 01 14:01:08 ip-NN.NN.NN bitnami[10824]: ## 2021-06-01 14:01:08+00:00 ## INFO ## Running
Jun 01 14:01:08 ip-NN.NN.NN systemd[1]: bitnami.service: Control process exited, code=exite
Jun 01 14:01:08 ip-NN.NN.NN systemd[1]: bitnami.service: Failed with result ‘exit-code’.
Jun 01 14:01:08 ip-NN.NN.NN systemd[1]: Failed to start LSB: bitnami init script.

4f9bc892-3038-f838-8348-7bc3f75e1541

David, here is the code for the support tool output.

David, I’m 99.999999% sure I configured elasticsearch.yml as stated above. Where it says “- <current_instance_private_ip>” I put the non-publicly routable private-subnet IP in place of everything between the quotes in the previous line. The “-” is not part of what I put in the elasticsearch.yml file, correct? Neither are the “< >”, right?

Under network.host I put the private IP again, but I have seen other instructions where they put “0.0.0.0” for network.host, network.publish_host and network.bind_host, stating that “0.0.0.0” would make Elasticsearch listen on all interfaces. I just want to be sure, because something is crashing everything I try.

Another question is about “initial_master_nodes”: in another tutorial I watched, they put all 3 as an inline list, for example [“nn.nn.nn.nn”, “nn.nn.nn.nn”, “nn.nn.nn.nn”], where the three entries were the IPs of all 3 nodes in the cluster. The other way to do this is like in the example above, where they appear on 3 lines below the item you are configuring.
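
To illustrate the two forms (as far as I can tell both are valid YAML for the same list, so either should work):

# inline (flow) style
initial_master_nodes: ["nn.nn.nn.nn", "nn.nn.nn.nn", "nn.nn.nn.nn"]

# block style: one entry per line, each with a leading dash
initial_master_nodes:
  - nn.nn.nn.nn
  - nn.nn.nn.nn
  - nn.nn.nn.nn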

here is the video I watched on YouTube, https://www.youtube.com/watch?v=0OIJaPhrblM

I don’t want to publish our actual elasticsearch.yml files publicly, but if you email me at dryanhawley@gmail.com I will email them to you, and never email you again with direct questions.

David

Hello @dryanhawley,

I can see there is a typo in your elasticsearch configuration:

cluster:
  name: bnCluster
  initial_master_nodes: ["172.XX.XX.XX6", "172.XX.XX.X4", ", "172.XX.XX.XX3"]

to

cluster:
  name: bnCluster
  initial_master_nodes: ["172.XX.XX.XX6", "172.XX.XX.X4", "172.XX.XX.XX3"]

Obviously, replace those X placeholders with the right values.

Regards

That problem doesn’t exist on the ClusterMaster. It must have happened when I was obfuscating the IPs.

Here is how the ClusterMaster and the data-nodes look now.

ClusterMaster elasticsearch.yml:

http:
  port: 9200
path:
  data: /bitnami/elasticsearch/data
transport:
  tcp:
    port: 9300
action:
  destructive_requires_name: true
network:
  host: ["127.0.0.1", "172.31.80.99"]
  publish_host: ["127.0.0.1", "172.NN.NN.NN"]
  bind_host: ["127.0.0.1", "172.NN.NN.NN"]
cluster:
  name: bnCluster
  initial_master_nodes: ip-172-NN-NN-NN   # (<<< this has to be commented out after the first boot, right? When I put all 3 servers it causes that error >>> comment only, not in the file)
node:
  name: ip-172-NN-NN-NN
  master: true
  data: true
  ingest: false
discovery:
  seed_hosts: ["ip-172-NN-NN-NN", "ip-172-NN-NN-NN", "ip-172-NN-NN-NN"]
  initial_state_timeout: 5m
gateway:
  recover_after_nodes: 1
  expected_nodes: 1
xpack:
  ml:
    enabled: false

I finally have it running without crashing Elasticsearch.

and on the data-nodes:

http:
  port: 9200
path:
  data: /bitnami/elasticsearch/data
transport:
  tcp:
    port: 9300
action:
  destructive_requires_name: true
network:
  host: ["127.0.0.1", "172.NN.NN.NN"]
  publish_host: ["127.0.0.1", "172.NN.NN.NN"]
  bind_host: ["127.0.0.1", "172.NN.NN.NN"]
cluster:
  name: bnCluster
  initial_master_nodes: ["ip-172-NN-NN-NN", "172.NN.NN.NN", "172.NN.NN.NN"]
node:
  name: ip-172-NN-NN-NN
  master: true
  data: true
  ingest: false
discovery:
  seed_hosts: ["172.NN.NN.NN", "172.NN.NN.NN", "172.NN.NN.NN"]
  initial_state_timeout: 5m
gateway:
  recover_after_nodes: 1
  expected_nodes: 1
xpack:
  ml:
    enabled: false

I removed the nodes/* directory recursively; the path is slightly different from the instructions:

rm -rf /bitnami/elasticsearch/data/nodes/*

(The path in the instructions doesn’t have a nodes directory.)

I hope we can get a slightly faster response, as a lot changes every 24 hours… Also, could you PLEASE answer the questions in the previous posts (if they are relevant; for example, I already know there are no “-” or “<>” allowed)? Should I put all 3 servers in initial_master_nodes?

The instructions never mention removing the nodes/* directories, so I didn’t. Should I have removed them after making the changes, after the initial boot-up?

#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.13/security-minimal-setup.html to enable security.
{
  "index" : "metricbeat-7.13.0-2021.06.01-000001",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2021-06-02T14:42:57.690Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "o5pZc4w5Ther0PO51Bs6ag",
      "node_name" : "ip-172-NN-NN-NN",
      "transport_address" : "172.NN.NN.NN:9300",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[metricbeat-7.13.0-2021.06.01-000001][0], node[o5pZc4w5Ther0PO51Bs6ag], [P], s[STARTED], a[id=Y4fQ4SgmSfyppG1jNkcLEQ]]"
        }
      ]
    }
  ]
}

Maybe this will help you diagnose. As you can see, the ClusterMaster can’t allocate or decide.
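
For context, this is general Elasticsearch behaviour rather than anything Bitnami-specific: the "same_shard" decider refuses to place a replica on the node that already holds the primary copy, so while only one data node is reachable the replica shards stay unassigned. One way to confirm this, only as a temporary workaround while the other data nodes have not yet joined (it removes redundancy for that index), would be to drop the replica count to zero:

curl -X PUT "http://127.0.0.1:9200/metricbeat-7.13.0-2021.06.01-000001/_settings" -H 'Content-Type: application/json' -d '{"index.number_of_replicas": 0}'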