Installing s3fs on CentOS


Many years ago I installed s3fs on CentOS servers and wrote about it; today I needed to install it on a new server.

I went straight to the instructions and, of course, as with everything open source, they were out of date. So here, for the next few months at least, are the new install instructions. 🙂

cd ~
mkdir software
cd software
wget -O master.zip https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip 

Some prerequisites 

yum -y install automake libcurl gcc-c++ \
libcurl-devel libxml2 libxml2-devel libtool gettext gettext-devel \
openssl openssl-devel
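
unzip is used in the next step; if it is not already on the box, pull it in too:

yum -y install unzip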
 
unzip master.zip
cd s3fs-fuse-master
./autogen.sh 

 

ERROR

--- Make commit hash file -------
--- Finished commit hash file ---
--- Start autotools -------------
./autogen.sh: 38: ./autogen.sh: aclocal: not found
--- Finished autotools ----------

Ensure that you have installed automake

./configure --prefix=/usr

ERROR

checking whether the C++ compiler works… no
configure: error: in `/root/software/s3fs-fuse-master’:
configure: error: C++ compiler cannot create executables
See `config.log’ for more details

Ensure that you have installed gcc-c++

No package ‘fuse’ found
No package ‘libcurl’ found
No package ‘libxml-2.0’ found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables common_lib_checking_CFLAGS
and common_lib_checking_LIBS to avoid the need to call pkg-config.

Ensure that you have installed fuse-devel, libcurl-devel and libxml2-devel

ERROR

configure: error: Package requirements (fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6) were not met:

Requested ‘fuse >= 2.8.4’ but version of fuse is 2.8.3

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables common_lib_checking_CFLAGS
and common_lib_checking_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

You need to uninstall the packaged fuse (it is too old, as the error above shows) and build a newer version from source.

You may get the following error

Error in PREUN scriptlet in rpm package realplay
XXXXXXX was supposed to be removed but is not!

rpm -e --noscripts --nodeps fuse
rpm --rebuilddb
yum erase fuse*

cd ~/software
wget -O fuse-2_9_bugfix.zip https://github.com/libfuse/libfuse/archive/fuse-2_9_bugfix.zip

unzip fuse-2_9_bugfix.zip
cd libfuse-fuse-2_9_bugfix
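
Building from the 2.9 bugfix tree follows the same autotools steps shown further down for the master branch; a rough sketch (the 2.x tree ships a makeconf.sh too, and the prefix matches what is used later):

./makeconf.sh
./configure --prefix=/usr/local
make
make install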

I tried the install with version 3; this was a disaster.

wget -O libfuse.zip https://github.com/libfuse/libfuse/archive/master.zip
unzip libfuse.zip
cd libfuse-master/

./makeconf.sh

Running libtoolize...
./makeconf.sh: line 4: libtoolize: command not found
config.rpath not found! - is gettext installed?

Ensure that you have installed libtool, gettext and gettext-devel

./configure --prefix=/usr/local
make
make install

OK, now to fix a path issue and a final dependency.

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

To check that it is installed and the path is correct:

pkg-config --modversion fuse
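
The export above only lasts for the current shell; if you want it to survive logins you could persist it (assuming root's bash profile):

echo 'export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig' >> ~/.bashrc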

These are the equivalent steps if you installed version 3:

pkg-config --modversion fuse3
ln -s /usr/local/lib/pkgconfig/fuse3.pc /usr/local/lib/pkgconfig/fuse.pc

cd ~/software/s3fs-fuse-master/
./configure --prefix=/usr

ERROR

checking for common_lib_checking… configure: error: Package requirements (fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6) were not met:

No package ‘fuse’ found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables common_lib_checking_CFLAGS
and common_lib_checking_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

ERROR

checking for DEPS… configure: error: Package requirements (fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6 libcrypto >= 0.9) were not met:

No package ‘libcrypto’ found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables DEPS_CFLAGS
and DEPS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

Ensure that you have installed openssl and openssl-devel

make
make install

OK, to test whether it installed correctly, run:

s3fs

It should respond with a usage message asking for a bucket argument:

s3fs: missing BUCKET argument.
Usage: s3fs BUCKET:[PATH] MOUNTPOINT [OPTION]…

ERROR

s3fs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory

yum install fuse-libs
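
That pulls the distro's libfuse libraries back in. If you would rather use the libfuse you just built under /usr/local, an alternative (a sketch; the file name under ld.so.conf.d is arbitrary) is to tell the dynamic linker where to look:

echo "/usr/local/lib" > /etc/ld.so.conf.d/usrlocal.conf   # any .conf name will do
ldconfig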

You need to create a .passwd-s3fs file. This is best done as root, as it should be stored in the home directory, and it should of course be secured.

cd ~
echo accessKeyId:secretAccessKey > .passwd-s3fs
chmod 600 ~/.passwd-s3fs
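
If more than one account will use the mount, s3fs can also read a system-wide password file instead; a sketch of that variant (same accessKeyId:secretAccessKey format, and s3fs insists on tight permissions):

echo accessKeyId:secretAccessKey > /etc/passwd-s3fs
chmod 640 /etc/passwd-s3fs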

Now create a mount point for the bucket

cd /mnt
mkdir bucketname

Naming the mount point after the bucket is only a suggestion, but it keeps things consistent and therefore easier to debug.

Then issue the s3fs command (to test whether the mount works):

s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs
 

NOTE: the -o allow_other option makes the mounted directory accessible to other users of the server.
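
So, to make use of it, the test mount above would become something like this (same hypothetical bucket and paths):

s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs -o allow_other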

If you encounter any errors, enable debug output:

s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs -d -d -f -o f2 -o curldbg

Now, to permanently mount the drive when the server boots up, the fstab entry is as follows:

s3fs#bucketname /mnt/mount_folder fuse allow_other 0 0

e.g.

vi /etc/fstab

s3fs#domainname-website-export /mnt/website-export fuse _netdev,allow_other 0 0

To mount the bucket

mount -a 
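
To confirm the bucket actually mounted, a quick check (the mount should show up with a fuse.s3fs type):

df -hT | grep s3fs
grep s3fs /proc/mounts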


Install s3fs on Ubuntu 14.04 LTS


Many years ago I installed s3fs on CentOS servers and wrote about it; today I needed to install it on an Ubuntu server.

I went straight to the instructions and, of course, as with everything open source, they were out of date. So here, for the next few months at least, are the new install instructions. 🙂

cd ~
mkdir software
cd software
 wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip

Some prerequisites 

apt-get -y install automake build-essential libfuse-dev fuse libcurl3 \
libcurl3-dev libxml2 libxml2-dev

unzip master.zip
cd s3fs-fuse-master
 ./autogen.sh 

ERROR

--- Make commit hash file -------
--- Finished commit hash file ---
--- Start autotools -------------
./autogen.sh: 38: ./autogen.sh: aclocal: not found
--- Finished autotools ----------

Ensure that you have installed automake

./configure --prefix=/usr

ERROR

checking whether the C++ compiler works… no
configure: error: in `/root/software/s3fs-fuse-master’:
configure: error: C++ compiler cannot create executables
See `config.log’ for more details

Ensure that you have installed build-essential

No package ‘fuse’ found
No package ‘libcurl’ found
No package ‘libxml-2.0’ found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables common_lib_checking_CFLAGS
and common_lib_checking_LIBS to avoid the need to call pkg-config.

Ensure that you have installed libfuse-dev, libcurl3-dev and libxml2-dev

A quick hint: if you are looking for a package, e.g. libcurl, you can use the following command:

apt-cache search libcurl

make
make install

 

OK, to test whether it installed correctly, run:

s3fs

It should respond with a usage message asking for a bucket argument:

s3fs: missing BUCKET argument.
Usage: s3fs BUCKET:[PATH] MOUNTPOINT [OPTION]…

You need to create a .passwd-s3fs file. This is best done as root, as it should be stored in the home directory, and it should of course be secured.

cd ~
echo accessKeyId:secretAccessKey > .passwd-s3fs
chmod 600 ~/.passwd-s3fs

Now create a mount point for the bucket

cd /mnt
mkdir bucketname

Naming the mount point after the bucket is only a suggestion, but it keeps things consistent and therefore easier to debug.

Then issue the s3fs command (to test whether the mount works):

s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs
 

NOTE: the -o allow_other option makes the mounted directory accessible to other users of the server.

If you encounter any errors, enable debug output:

s3fs mybucket /path/to/mountpoint -o passwd_file=~/.passwd-s3fs -d -d -f -o f2 -o curldbg

 

Now, to permanently mount the drive when the server boots up, the fstab entry is as follows:

s3fs#bucketname /mnt/mount_folder fuse allow_other 0 0

e.g.

vi /etc/fstab

s3fs#domainname-website-export /mnt/website-export fuse _netdev,allow_other 0 0

Allowing access to a mounted drive for a non-root user is a bit of a headache.

Change the /etc/fuse.conf file and uncomment the user_allow_other line:

# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other

Then add the mount line to /etc/fstab

s3fs#bucketname mount_point  fuse _netdev,allow_other,umask=700,use_rrs  0 0

The mount options (allow_other, umask and use_rrs) are the important entries here.

To mount the bucket

mount -a 
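
Once mount -a runs cleanly, a quick sanity check that a non-root account can actually see the bucket (the user name here is hypothetical; use the mount point from your own fstab line):

sudo -u someuser ls -l /mnt/mount_folder   # someuser is a placeholder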

 

 

 

 


Infobright Database Dump

We are about to upgrade from the ICE version to the IEE version.

You can just take a backup of the data directory and upgrade the software, but I do not trust myself, so I am also dumping the database.

There is an issue, though, when trying to do this with the ICE version. If you issue the command

mysqldump-ib -h localhost -u root -p pentaho > infobrightpentaho.sql

You will see the following
Warning: mysqldump: unknown variable ‘loose-local-infile=1’
Enter password:

You then get this error
mysqldump: Got error: 1031: Table storage engine for ‘BRIGHTHOUSE’ doesn’t have this option when using LOCK TABLES

To get around this, issue the following:

mysqldump-ib -h localhost -u root -p --single-transaction pentaho > infobrightpentaho.sql

Again you will get the warning, but it will start to extract the data.
Warning: mysqldump: unknown variable ‘loose-local-infile=1’
Enter password:

You can also just output the structure of the database, i.e. all the table definitions without the data.

mysqldump-ib -h localhost -u root --single-transaction --no-data pentaho |egrep -v "(^SET|^/\*\!)" | sed 's/ AUTO_INCREMENT=[0-9]*\b//' > infobright_pentaho.sql
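
For the restore into the new IEE install, I would expect something along these lines, assuming Infobright ships a companion mysql-ib client alongside mysqldump-ib (check your Infobright bin directory; a stock mysql client pointed at the Infobright port should do the same job):

mysql-ib -h localhost -u root -p pentaho < infobrightpentaho.sql   # mysql-ib is an assumption, adjust to your client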

Linux Windows mounts

Sometimes you just want to mount a Windows share and don't really want to go through the pain of Samba etc., so you look for the down-and-dirty username and password file (password in plain text; down and dirty, as I said) mount command.

 

I did this a few years ago and everything worked fine. On a new installation today, I followed the normal method of achieving this as I have done in the past.

The details of the OS are as follows:

uname -r
3.10.0-123.el7.x86_64

cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)

Install the cifs utilities so mounting is possible

yum install cifs-utils
mkdir ~/.sharecredentials
echo "username=DOMAIN.LOCAL\Username" > ~/.sharecredentials/credentials
echo "password=secretpassword" >> ~/.sharecredentials/credentials
chmod -R 600 ~/.sharecredentials

Now create the directory where the mount will sit

mkdir /mnt/windowsshare

Edit fstab as you normally would and add the following:

//fileshare.domain.local/shareddirectory /mnt/windowsshare cifs credentials=/root/.sharecredentials/credentials,_netdev 0 0

mount -a

And suddenly I get the error

mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

tail /var/log/messages

May 22 08:07:56 webserver-01 kernel: Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
May 22 08:07:56 webserver-01 kernel: CIFS VFS: Send error in SessSetup = -13
May 22 08:07:56 webserver-01 kernel: CIFS VFS: cifs_mount failed w/return code = -13
May 22 08:08:21 webserver-01 kernel: Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
May 22 08:08:21 webserver-01 kernel: CIFS VFS: Send error in SessSetup = -13
May 22 08:08:21 webserver-01 kernel: CIFS VFS: cifs_mount failed w/return code = -13
May 22 08:09:45 webserver-01 kernel: Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
May 22 08:09:45 webserver-01 kernel: CIFS VFS: Send error in SessSetup = -13
May 22 08:09:45 webserver-01 kernel: CIFS VFS: cifs_mount failed w/return code = -13

Now, you would think that you might have the password wrong, but no. The syntax of the credentials file has changed: only a little, but just enough.

The syntax should now read

username=
password=
domain=

so in our instance

username=username
password=secretpassword
domain=domain.local
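
So the credentials file creation from earlier becomes (same placeholder values as before):

echo "username=username" > ~/.sharecredentials/credentials
echo "password=secretpassword" >> ~/.sharecredentials/credentials
echo "domain=domain.local" >> ~/.sharecredentials/credentials
chmod -R 600 ~/.sharecredentials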

mount -a  is now successful

 


Creating Desktop shortcuts with GPO

OK, sometimes I hate Windows; there is no consistency, but I suppose if there was we would be out of a job :).

 

OK, creating a desktop shortcut and publishing it via GPO is pretty easy; there are loads of articles on Google (technet.microsoft.com), and the reason you have landed here is that you are getting the following error:

The computer ‘Task Name’ preference item in the ‘Test Policy {BACD99EF-75BF-496E-8FDD-BDC3704DBB1D}’ Group Policy object did not apply because it failed with error code ‘0x80070057 The parameter is incorrect.’ This error was suppressed.)

Below is a screenshot of what the parameters should look like for publishing Word, but it is pretty much the same for any application.

[Screenshot: GPO shortcut preference settings for publishing Word]

 

There are two things you need to look out for:

 

  • The target path should not be enclosed in quotes as you would do in many areas of Windows.
    • e.g. "C:\program files\" should just be C:\program files
  • Ensure that you enter a Start In location; this is the same as the target path excluding the application reference.

 

 

 


Postgres Cheat Sheet

I am new to Postgres, but not to Oracle, MySQL, Infobright etc., so the way you connect to and use Postgres was a bit of a shock to the system. It just takes some getting used to.

Anyway I will add to this as I learn more.

To connect

su - postgres
psql - this gets you into the Postgres interactive terminal
psql <Database Name> - gets you straight into the correct database 🙂

Once you are in, you can use these (I picked this up from a forum board I came across):

\d [NAME] describe table, index, sequence, or view
\d{t|i|s|v|S} [PATTERN] list tables/indexes/sequences/views/system tables (add "+" for more detail)
\da [PATTERN] list aggregate functions
\db [PATTERN] list tablespaces (add "+" for more detail)
\dc [PATTERN] list conversions
\dC list casts
\dd [PATTERN] show comment for object
\dD [PATTERN] list domains
\df [PATTERN] list functions (add "+" for more detail)
\dg [PATTERN] list groups
\dn [PATTERN] list schemas (add "+" for more detail)
\do [NAME] list operators
\dl list large objects, same as \lo_list
\dp [PATTERN] list table, view, and sequence access privileges
\dT [PATTERN] list data types (add "+" for more detail)
\du [PATTERN] list users
\l list all databases (add "+" for more detail)
\z [PATTERN] list table, view, and sequence access privileges (same as \dp)
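
A few more basics that are not in the list above but that I use constantly (standard psql meta-commands):

\c dbname   connect to another database
\x          toggle expanded output (handy for wide rows)
\timing     toggle query timing
\q          quit psql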

Creating a LAMP server on Fedora Core

This was done on Fedora Core 10, but I suspect it will work with most versions above. The original guide is here

But the brief guide is as follows

yum install httpd

chkconfig httpd on

yum install mysql mysql-server

chkconfig mysqld on
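
At this point I would also start MySQL and run the bundled hardening script; the original guide does not mention it, but both come with the mysql-server package:

service mysqld start
mysql_secure_installation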

yum install php php-mysql

if you want python and perl
yum install mod_python MySQL-python
yum install perl mod_perl perl-DBD-mysql

service httpd start

To just test whether this works, turn off the firewall:
service iptables stop

If you want to permanently turn it off:
chkconfig iptables off

Then issue this

echo "<?php phpinfo(); ?>" > /var/www/html/index.php

Open a web browser and go to http://x.x.x.x, where x.x.x.x is the server IP address or name; you should get a PHP info page. If you don't, then you are on your own I am afraid; start by checking the httpd logs.

To remove the test file

rm -f /var/www/html/index.php

You now have a working LAMP server

Sometimes you will need the mod_rewrite module.

vi /etc/httpd/conf/httpd.conf

Add this line

RewriteEngine on

Also change this line

AllowOverride None   to   AllowOverride All

Note: I think this should only be done on internal and test servers; please check this setting before you put it into production.
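
To check that the override is actually being picked up, a throwaway .htaccess in the document root is enough; this hypothetical rule just sends anything that is not a real file to index.php:

# /var/www/html/.htaccess - test rule only, remove it afterwards
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [QSA,L]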

For more details on the RewriteEngine see here

Don't forget to restart the httpd service every time you make a change to this file:

service httpd restart