Installing Elasticsearch 1.4.4


I am currently trialling the new version of SugarCRM, 7.6, and the installation now requires an Elasticsearch server.

I instantly went to the AWS Elasticsearch service, but accessing it is a bit of a pain; more on that later.

To install Elasticsearch you first need to install Java. I am installing this on AWS, of course, so there are a few extra bits and pieces.

 

To start with, uninstall the bundled OpenJDK. List the installed Java packages:

rpm -qa | grep -i java

tzdata-java-2015g-1.35.amzn1.noarch
javapackages-tools-0.9.1-1.5.amzn1.noarch
java-1.7.0-openjdk-1.7.0.91-2.6.2.2.63.amzn1.x86_64

This shows the Java packages currently installed. Remove the OpenJDK package:

yum erase java-1.7.0-openjdk

Download the JDK from the Oracle site, transfer it to the server, and install it:

rpm -i jdk-8u65-linux-x64.rpm
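With the Oracle JDK installed, it is worth a quick check that it is now the active Java before going any further (the 1.8.0_65 build corresponds to the jdk-8u65 RPM above):

```shell
# Confirm the newly installed JDK is the one on the PATH
which java
java -version
```

If `java -version` still reports an OpenJDK build, the erase step above did not remove everything.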

Now we want Elasticsearch itself.

SugarCRM 7.6 will only work with Elasticsearch 1.4.4, so download that specific version:

mkdir /opt/software
cd /opt/software
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.4.4.noarch.rpm
rpm -i elasticsearch-1.4.4.noarch.rpm
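The RPM drops an init script in place but does not necessarily enable it at boot. On Amazon Linux (SysV-style init) you can register it with chkconfig; a minimal sketch:

```shell
# Register the Elasticsearch init script and enable it for the default runlevels
chkconfig --add elasticsearch
chkconfig elasticsearch on

# Verify which runlevels it is enabled for
chkconfig --list elasticsearch
```

This means Elasticsearch comes back automatically if the instance is rebooted.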

Now to do some configuration:

vi /etc/elasticsearch/elasticsearch.yml

Change the following sections:

 

################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you’re running
# multiple clusters on the same network, make sure you’re using unique names.
#
#cluster.name: elasticsearch
cluster.name: elasticsearch

#################################### Node #####################################

# Node names are generated dynamically on startup, so you’re relieved
# from configuring them manually. You can tie this node to a specific name:
#
#node.name: "Franz Kafka"
node.name: "node1"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
#node.master: true
node.master: true

#
# Allow this node to store data (enabled by default):
#
#node.data: true
node.data: true

#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#
#path.conf: /path/to/conf
path.conf: /etc/elasticsearch

# Path to directory where to store index data allocated for this node.
#
#path.data: /path/to/data
path.data: /opt/elasticsearch/data

#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
#path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#
#path.work: /path/to/work
path.work: /opt/elasticsearch/work

# Path to log files:
#
#path.logs: /path/to/logs
path.logs: /opt/elasticsearch/logs

 

############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#
#network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
#network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
#network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#
#transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#
#transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#
#http.port: 9200
http.port: 9200

 

You now need to create the directory structure referenced in the configuration above:

mkdir -p /opt/elasticsearch/work
mkdir -p /opt/elasticsearch/logs
mkdir -p /opt/elasticsearch/data
chown -R elasticsearch:elasticsearch /opt/elasticsearch

If you try to start the service now, you will get the following error:

service elasticsearch start

elasticsearch: Failed to configure logging…
elasticsearch: org.elasticsearch.ElasticsearchException: Failed to load logging configuration
elasticsearch: at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:135)
elasticsearch: at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:85)
elasticsearch: at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:94)
elasticsearch: at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:178)
elasticsearch: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
elasticsearch: Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
elasticsearch: at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
elasticsearch: at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
elasticsearch: at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
elasticsearch: at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
elasticsearch: at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
elasticsearch: at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
elasticsearch: at java.nio.file.Files.readAttributes(Files.java:1737)
elasticsearch: at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)
elasticsearch: at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
elasticsearch: at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
elasticsearch: at java.nio.file.Files.walkFileTree(Files.java:2662)
elasticsearch: at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:119)
elasticsearch: … 4 more
elasticsearch: log4j:WARN No appenders could be found for logger (bootstrap).
elasticsearch: log4j:WARN Please initialize the log4j system properly.
elasticsearch: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
elasticsearch.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
Unit elasticsearch.service entered failed state.

To fix this, create the missing directories under /usr/share/elasticsearch, copy the logging configuration into place, and hand ownership to the elasticsearch user:

mkdir -p /usr/share/elasticsearch/logs
mkdir -p /usr/share/elasticsearch/config
mkdir -p /usr/share/elasticsearch/data
cp /etc/elasticsearch/logging.yml /usr/share/elasticsearch/config/
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/logs
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/config
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
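With those directories in place the service should start cleanly. A quick sanity check (assuming the init script installed by the RPM):

```shell
# Start the service and give the JVM a few seconds to bind port 9200
service elasticsearch start
sleep 15

# Ask the node for its banner; it returns a small JSON document
curl -s http://localhost:9200/
```

If everything worked, the JSON banner should report a version number of 1.4.4.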

To test the connection, index a document:

curl -X POST 'http://FQDN:9200/tutorial/helloworld/1' -d '{ "message": "Hello World!" }'

To retrieve the record:

curl -X GET 'http://FQDN:9200/tutorial/helloworld/1'
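Once you have confirmed that indexing and retrieval work, you can tidy up with the same document API using the DELETE verb:

```shell
# Remove the test document created above (FQDN is your server's hostname, as before)
curl -X DELETE 'http://FQDN:9200/tutorial/helloworld/1'
```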

 

Note that if you have configured this in AWS, you must use the internal IP address of the server, or an FQDN that resolves to the internal address.
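If you are not sure what the internal address is, the EC2 instance metadata endpoint will tell you; run this on the Elasticsearch server itself:

```shell
# Query the EC2 metadata service for this instance's private IPv4 address,
# then test Elasticsearch against that address
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
curl -s "http://${INTERNAL_IP}:9200/"
```

Remember that the security group must also allow inbound traffic on port 9200 from the SugarCRM host.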

 

Posted: December 17, 2015 | Author: survivalguides | Filed under: Linux | Tags: AWS, ElasticSearch, SugarCRM