Showing posts with label Linux.

DDNS and QNAP NAS

I recently looked into allowing remote access to my home QNAP NAS.
My IP address at home is dynamic, so I needed a DDNS provider and some port forwarding on the home router.
My setup at home is simple. My internet connection is over a DSL line. I have a DSL router from the ISP serving as a Wi-Fi access point for my devices at home. It also has four Ethernet ports, and the QNAP NAS is connected to one of them.

After looking through a few reviews on the internet, I chose Duck DNS. What I liked about it the most is its broad operating-system support and the way the dynamic update is done: through an HTTPS GET request (HTTP GET also works, but HTTPS is recommended). Secure and implemented in any decent OS. Full specs here.
You log in with an account from one of several social networks (reddit, G+, facebook, twitter) and a token is assigned to your account. At the time of writing you can use up to 5 subdomains.

The QNAP itself can act as a DDNS client for a few providers, but Duck DNS is not one of them.


To use Duck DNS on the QNAP NAS, I added an entry to the /etc/config/crontab file to update my IP every 2 hours:
 0 */2 * * * /share/Valentin/duckdns/duck.sh >/dev/null 2>&1
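The duck.sh script referenced in the crontab entry is essentially a one-line HTTPS GET against the Duck DNS update URL. A minimal sketch is below; DOMAIN and TOKEN are placeholders, not my real values, and the curl line is commented out so the sketch does not fire a request:

```shell
#!/bin/sh
# Minimal sketch of a Duck DNS update script (duck.sh).
DOMAIN="exampledomain"   # placeholder subdomain
TOKEN="your-token-here"  # placeholder token
# An empty ip= parameter tells Duck DNS to use the source IP of the request.
URL="https://www.duckdns.org/update?domains=${DOMAIN}&token=${TOKEN}&ip="
echo "$URL"
# curl -k -s "$URL" -o /tmp/duck.log   # uncomment to perform the actual update
```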

Catfish fails to update database

I have just tried Catfish for local file search. I am using it (version 1.2.2 at the time of writing) on Debian Jessie with the Xfce desktop.
Every time I opened Catfish I was prompted to update the database, as if it had never been updated. I ran the update with administrative rights, but after closing and reopening the application the same message was displayed.



Judging by the message prompt, Catfish looks for the database in /var/lib/locate.findutils/locate.findutils.db.
The catfish utility is based on locate and find, and the locate database is updated by the updatedb command.
The man page of updatedb for my debian system says:

       --output=dbfile
              The  database  file  to  build.  Default is system-dependent.  In Debian GNU/Linux, the default is /var/cache/locate/locatedb.
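Since Catfish expects the database at /var/lib/locate.findutils/locate.findutils.db while Debian's updatedb writes to /var/cache/locate/locatedb by default, one possible workaround (an assumption on my part, not a documented fix) is to build the database at the path Catfish expects:

```
sudo updatedb --output=/var/lib/locate.findutils/locate.findutils.db
```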

cli remote management of ESXi virtual machines over ssh

One of the ways to manage ESXi virtual machines is through vim-cmd commands. You can look at a quick tutorial of the command here.

Let's say there are many ESXi servers in your environment and you'd like to manage some virtual machines from the cli.
One way to do this is to run vim-cmd commands over ssh. For example, I have two ESXi hosts, named esxi1 and esxi2 here. On both of them I have one VM:
~$ ssh root@esxi1.localdomain "vim-cmd vmsvc/getallvms"
Vmid             Name                                         File                                  Guest OS       Version             Annotation
7      Linux                       [datastore1] Linux/Linux.vmx                                 debian6_64Guest    vmx-08

~$ ssh root@esxi2.localdomain "vim-cmd vmsvc/getallvms"
Password: 
Vmid           Name                                      File                                  Guest OS        Version             Annotation
13     VSRX                   [datastore1] VSRX/VSRX.vmx                                   otherGuest          vmx-09    VSRX OVF Template
So, this works fine and there's actually nothing special about it: it's just running remote commands over ssh.

If you'd like to make things easier, you can use ssh public key authentication for the remote ESXi hosts (no need to type the password every time you want to run a command).
I wrote some bash functions to make it even easier (to remember) and shorter to type.
This is what I have among other functions and things in my .bashrc file:
function start_vm () { ssh root@${1}.localdomain "vim-cmd vmsvc/power.on ${2}" ;}
function stop_vm () { ssh root@${1}.localdomain "vim-cmd vmsvc/power.off ${2}" ;}
function reboot_vm () { ssh root@${1}.localdomain "vim-cmd vmsvc/power.reboot ${2}" ;}
function getallvm () { ssh root@${1}.localdomain "vim-cmd vmsvc/getallvms" ;}
function powerstate_vm () { ssh root@${1}.localdomain "vim-cmd vmsvc/power.getstate ${2}" ;}
function getnetworks () { ssh root@${1}.localdomain "vim-cmd vmsvc/get.networks ${2}" ;}
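For reference, a couple of usage examples with these functions (the host name and VM ID come from the getallvms listing earlier; adjust to your environment):

```
getallvm esxi1          # list all VMs on esxi1
powerstate_vm esxi1 7   # query the power state of Vmid 7
start_vm esxi1 7        # power on the Linux VM
```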

Recovering corrupted gzip files. gzip: compressed_file.gz: unexpected end of file


A tool to recover corrupted gzip-compressed files is gzrecover, which comes in the recovery toolkit package gzrt.

$ gzip -d compressed_file.gz
gzip: compressed_file.gz: unexpected end of file

$ file compressed_file.gz
compressed_file.gz: gzip compressed data, from Unix

$ gzrecover compressed_file.gz
$ ls compressed_file*
compressed_file.gz  compressed_file.recovered

ssh tunnels set up and port forwarding

SSH tunnels allow you to forward a local TCP port to a remote machine and vice versa. The tunnel option is available in many ssh clients. I will give the example here on how to create SSH tunnels with putty and the openssh-client.
In the examples below, I will assume we want to access the SERVER on port 80 (an http server).

Scenario 1. SSH tunnel setup with local port forwarding.


The SSH tunnel is shown with the red arrow. In order to access the SERVER through the ssh tunnel the connection will have to be made on the CLIENT's local forwarded port (2000 in the example). The traffic between the CLIENT and SSH-HELPER is encrypted by ssh, the traffic between the SSH-HELPER and the SERVER is not encrypted.

SSH tunnel  is set up on the CLIENT:
openssh-client:

ssh -L *:2000:server:80 ssh-helper

The '*' before the local forwarded port 2000 means that port 2000 should listen on all available interfaces on the client, not only on loopback; whether this is allowed is controlled by the GatewayPorts option in the openssh-client configuration.
putty:

The tunnel configuration is done under Connection - SSH - Tunnels. Source port is the local port, destination is where the connection will be forwarded after exiting the SSH tunnel.
After you specify source port and destination, you need to click "Add" for the configuration to take effect.
If you want to access the remote server from other hosts, make sure you check the box "Local ports accept connections from other hosts", otherwise port 2000 will be opened only on the loopback address (127.0.0.1).
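Once the tunnel is up, the forward can be verified from the CLIENT itself (the port follows the scenario above):

```
curl http://127.0.0.1:2000/   # answered by SERVER's port 80 through the tunnel
ss -tln | grep 2000           # confirm ssh is listening on local port 2000
```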

Linux gnome add menu and desktop entry for applications


Users can create shortcuts for an application to appear in menus, on the desktop, etc. by creating a .desktop file. This file contains a listing of configuration keys for the application. In order for your desktop environment to "see" it, the .desktop file has to be placed either in /usr/share/applications (for system-wide applications) or in ~/.local/share/applications (if the application should be available to a single user).

Once the file is placed in this location the desktop uses this file to:


  • put the application in the Main Menu
  • list the application in the Run Application... dialog
  • create appropriate launchers in the menu or on the desktop.
  • associate the name and description of the application.
  • use the appropriate icon.
  • recognize the MIME types it supports for opening files.
Here's an example of a desktop file I created for the TinyCA application. I am running Debian Jessie and this application does not come with any menu entries or a desktop shortcut.

$ cat ~/.local/share/applications/tinyca2.desktop

[Desktop Entry]
Version=1.0
Type=Application
Name=TinyCA Certificate Authority
NoDisplay=false
Categories=Network;
Icon=security-low
Exec=/usr/bin/tinyca2
Terminal=false
Comment=TinyCA2 certificate authority
The entries one by one:

[Desktop Entry] - identifies the group to which the desktop entry belongs. A group name is enclosed in [ ] and there can be more than one group in a desktop file. The [Desktop Entry] group is required for a basic desktop file.
Version - the version of the desktop entry specification. This field is not required.
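After creating or editing a .desktop file, it is worth validating it. On Debian the desktop-file-utils package provides tools for this (the path below matches the TinyCA example):

```
desktop-file-validate ~/.local/share/applications/tinyca2.desktop
update-desktop-database ~/.local/share/applications/
```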

Connect Evolution email client to Exchange office365 server

Evolution is a personal information management application that provides integrated mail, calendaring and address book functionality.
To connect to office365 exchange you will need the evolution program and the evolution-ews plugin.

Installation:
% sudo apt-get install evolution evolution-ews
Account setup:
1. Obtain the information about your domain's office365 servers.
To find the exchange web services URL based on the verified answer on this office365 post:
-Logon to your e-mail account using Outlook Web App (OWA).
-Click Options > See All Options > Account > My Account > Settings for POP, IMAP, and SMTP access.
-In the list of entries, locate the server name. The Exchange Web Services URL for your mailbox is then "https://" + server name + "/EWS/Exchange.asmx".
So, in my case the Exchange Web Services URL is https://outlook.office365.com/EWS/Exchange.asmx

2. In the evolution wizard enter the details.

Installing Debian wheezy from USB over serial console


I wanted to install Debian wheezy on a machine with no video card and no CDROM. I had a 1 GB USB stick and initially tried the easy way, as described in https://www.debian.org/releases/stable/amd64/ch04s03.html.en#usb-copy-isohybrid, by just copying the cd image to my /dev/sda (the USB stick), but I did not get any output on the console. I then copied the iso file to another linux machine, mounted it as a loop device, modified the isolinux.cfg file to redirect everything to the serial console by adding the kernel parameters shown below, and recreated the iso file, but that still did not output anything on the serial console.

 isolinux.cfg
serial 0 9600
default install
prompt 0
timeout 100
label install
  kernel install.amd/vmlinuz
  append console=ttyS0,9600n8 initrd=/install.amd/initrd.gz --quiet
So I gave up on this quite easily and took another approach. Next I followed the flexible way, as described in the debian documentation, but with some changes:
1/ I set up 2 partitions on the USB disk (/dev/sda in this example), both FAT16 (code e in fdisk), both 500MB, and set the bootable flag on /dev/sda1
2/ pretty much followed the documentation and set up FAT16 filesystems on /dev/sda1 and /dev/sda2:
mkdosfs /dev/sda1
mkdosfs /dev/sda2
3/ I installed the MBR on /dev/sda
install-mbr /dev/sda
4/ Installed the syslinux bootloader on /dev/sda1
syslinux /dev/sda1
5/ Copied the kernel and the initial ram image to /dev/sda1. This is done by mounting /dev/sda1 on, let's say, /mnt and then copying vmlinuz and initrd.gz from the install.amd/ folder of the netinst cd to /mnt (where /dev/sda1 is mounted). Then I created the syslinux.cfg file on /mnt according to the documentation and added the entries:
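The syslinux.cfg entries themselves are not shown here; a sketch consistent with the isolinux.cfg used earlier (serial console on ttyS0 at 9600 baud, with vmlinuz and initrd.gz copied to the root of /dev/sda1) would be:

```
serial 0 9600
default install
prompt 0
timeout 100
label install
  kernel vmlinuz
  append console=ttyS0,9600n8 initrd=initrd.gz
```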

ssh keepalives and tcp keepalives in openssh

The SSH connection can be kept alive either with SSH keepalive packets (encrypted) or with TCP keepalive packets. This also makes it possible to detect hanging sessions and disconnect the hanging client/server when a connection has become inactive.

On an OpenSSH server, the parameters controlling SSH keepalive packets are:
ClientAliveCountMax 3 (default)
ClientAliveInterval 0 (default) - means the server will not send SSH keepalive packets
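For illustration, enabling server-side keepalives in /etc/ssh/sshd_config might look like this (the values are examples, not recommendations):

```
ClientAliveInterval 60   # probe the client after 60 seconds of inactivity
ClientAliveCountMax 3    # drop the connection after 3 unanswered probes
```

On the client side, the corresponding options are ServerAliveInterval and ServerAliveCountMax.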

Replaying packets with tcpreplay

Tcpreplay is a suite of tools that allows editing and replaying previously captured traffic in libpcap format. This can come handy in many situations, one common use is traffic pattern based behavior re-creation in a lab environment.
Tcpreplay suite comes with the following tools:
  • tcpprep - multi-pass pcap file pre-processor which determines packets as client or server and creates cache files used by tcpreplay and tcprewrite
  • tcprewrite - pcap file editor which rewrites TCP/IP and Layer 2 packet headers
  • tcpreplay - replays pcap files at arbitrary speeds onto the network
  • tcpliveplay - Replays network traffic stored in a pcap file on live networks using new TCP connections
  • tcpreplay-edit - replays and edits pcap files at arbitrary speeds onto the network
  • tcpbridge - bridge two network segments with the power of tcprewrite
  • tcpcapinfo - raw pcap file decoder and debugger
To exemplify the use of tcpreplay, let's say we have a setup with a replay machine connected to our DUT (Device Under Test). We're interested in how the DUT reacts to a specific traffic pattern that is, let's say, very specific to this environment. I will assume the DUT is a Layer 3 device.
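A typical Layer 3 replay workflow against a DUT might look like the sketch below (file names, MAC addresses and interface names are illustrative):

```
# Split packets into client/server using tcpprep and build a cache file
tcpprep --auto=bridge --pcap=capture.pcap --cachefile=capture.cache
# Rewrite destination MAC addresses so frames are directed at the DUT's interfaces
tcprewrite --enet-dmac=00:11:22:33:44:55,66:77:88:99:aa:bb \
  --cachefile=capture.cache --infile=capture.pcap --outfile=rewritten.pcap
# Replay the rewritten capture out of two interfaces
tcpreplay --cachefile=capture.cache --intf1=eth0 --intf2=eth1 rewritten.pcap
```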

mdadm tips on Linux software RAID

mdadm is a tool for managing, creating and reporting on Linux software RAID arrays.

I will describe some tips which I have found useful.

Improve RAID1 re-sync time with write-intent bitmap

The RAID driver periodically writes out bitmap information recording which areas of the RAID component devices have been modified since the RAID array was last in sync.

If, for example, one of the two members of a RAID1 array fails and is removed from the array, md (the multiple device software RAID driver) will record in the bitmap the changes the active member undergoes since the two members were last in sync. If the same failed/removed drive is re-added to the RAID1 array, md will notice and will recover only the portions indicated by the bitmap. In this way a lengthy re-sync is avoided (a full re-sync is normally needed if the drives are not in sync when the array starts up).
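Adding (or removing) a write-intent bitmap on an existing array is done with mdadm's grow mode; /dev/md0 below is illustrative:

```
mdadm --grow --bitmap=internal /dev/md0   # add an internal write-intent bitmap
cat /proc/mdstat                          # the bitmap line shows pages in use
mdadm --grow --bitmap=none /dev/md0       # remove the bitmap again
```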

Linux ephemeral port range for TCP/UDP connections over IPv4

The range of ephemeral ports a client program can use (unless otherwise specified by the program) on modern Linux OS distributions by default is from 32768 till 61000 (for systems with more than 128 MB RAM) and from 1024 till 4999 (or even less) for systems with less than 128MB of RAM. This range is defined in the kernel parameter /proc/sys/net/ipv4/ip_local_port_range and it affects both TCP as well as UDP client connections.
Should there be a need to extend this range (for example setting the lowest port number to 15000), we can use:
echo "15000 61000" > /proc/sys/net/ipv4/ip_local_port_range

To make this change persistent after reboots, we can use sysctl.
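A sketch of making the change persistent with sysctl (assuming /etc/sysctl.conf is read at boot on your distribution):

```
echo "net.ipv4.ip_local_port_range = 15000 61000" >> /etc/sysctl.conf
sysctl -p    # re-read /etc/sysctl.conf and apply the setting now
```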

Resizing extended partitions with GNU parted

This post will show how to resize an extended partition using GNU parted. There are many tools for partitioning available, but I wanted to use a tool which was by default installed in my test system (which runs CentOS Linux).

In summary "The GNU Parted program allows you to create, destroy, resize, move,and copy hard disk partitions. Parted can be used for creating space for new operating systems, reorganizing disk usage, and copying data to new hard disks."

On my test CentOS system I had three primary partitions and one extended partition, as below:
Model: ATA ST3500320AS (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      32.3kB  535MB   535MB   primary   ext3         boot
 2      535MB   11.0GB  10.5GB  primary   ext3
 3      11.0GB  12.1GB  1078MB  primary   linux-swap
 4      12.1GB  37.1GB  25.0GB  extended
 5      12.1GB  37.1GB  25.0GB  logical   lvm
As is visible, I had plenty of space on my hard drive (500GB), but I could use only approximately 7% of it (since I had 3 primary partitions and one extended, there was no way to create another partition).
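As a hedged sketch of the resize itself: older parted releases (2.x, as shipped with CentOS of that era) grow a partition with the resize command, while newer parted (3.2 and later) uses resizepart; the partition number and sizes follow the listing above:

```
parted /dev/sda
(parted) resize 4 12.1GB 500GB   # parted 2.x: give new start and end
(parted) resizepart 4 500GB      # parted >= 3.2: give new end only
```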

Core files name control in Linux kernel 2.6

By default, on many Linux distributions, core file generation is disabled.
If you choose to enable it (say, because you have applications crashing and you want an image of the application's process memory at the time of termination) and want to control the name of the generated core file, you need to edit /proc/sys/kernel/core_pattern. The default value in this file is "core", which means core files will be dumped in the working directory with the filename core. If you want to change this pattern, dump the core files in the /tmp directory, and also append the process ID to the file name, do the following (be aware that this is a system-wide setting):

echo /tmp/core%p > /proc/sys/kernel/core_pattern
or
sysctl -w kernel.core_pattern=/tmp/core%p
If you choose to make the change persistent after reboot, edit the /etc/sysctl.conf file and add:
kernel.core_pattern = /tmp/core%p
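Note that core_pattern only controls the name and location; core dumps must also be allowed through the core file size resource limit, which is often 0 by default. A quick sketch of testing the setting:

```
ulimit -c unlimited   # allow core files in the current shell
sleep 100 &           # start a victim process
kill -SEGV $!         # force a segmentation fault
ls /tmp/core*         # the core file appears as /tmp/core<pid>
```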

NAT & IP forwarding on Linux gateway

Suppose we have only one publicly routable IP address assigned by our ISP and we want the computers located in our internal LAN to be able to connect to the internet. Using private IP addresses behind NAT is a common way to access the internet while still sharing internal resources.
For ease of explanation we'll add some details to our scenario.

eth0 – the network interface card (NIC) connected to the ISP net
eth1 – the NIC connected to the internal LAN

As for the gateway there are some basic requirements:
- we'll need at least 2 network interface cards supported by the kernel (one or more connected to the internal LAN switch/hub, one or more connected to the ISP's net)
- support for networking, iptables and NAT in the kernel (for default 2.6/2.4 kernels on major Linux distributions this is enabled by default)
- IP forwarding enabled (disabled by default on modern Linux distributions). There are several ways to accomplish this; the commonly accepted method is through sysctl

Run the following command as root:
sysctl -w net.ipv4.ip_forward=1
To make the change permanent we can add the following line in /etc/sysctl.conf
net.ipv4.ip_forward = 1
To enable the change made to the /etc/sysctl.conf file run
sysctl -p /etc/sysctl.conf
Finally, to allow hosts connected in the internal LAN to access internet resources configure the Linux gateway as:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
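If the FORWARD chain policy is not ACCEPT, matching forward rules are needed as well (interface names as in the scenario above):

```
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```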

Duplex mismatches 100BASE-TX

I encountered a situation a few days ago where two devices were not correctly negotiating the duplex setting. One box was a Cisco 2950 switch, the other was a Linux machine.
Although the ethernet interfaces of the 2 boxes were both capable of 100BASE-TX (full duplex), it was clear that the Linux machine's eth1 was running in half duplex mode.
A tool on Linux which can display/change an ethernet card setting is ethtool.
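For example (interface name as in this situation; exact output varies by driver), checking and then forcing the setting with ethtool:

```
ethtool eth1                                        # show current speed/duplex/autoneg
ethtool -s eth1 speed 100 duplex full autoneg off   # force 100 Mb/s full duplex
```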
Now for a bit of theory:

The link speed is determined by electrical signaling, so either end of a link can determine what the other end is trying to use. If both ends of the link are configured to autonegotiate, they will use the highest speed that is common to them.

A link's duplex mode, however, is negotiated through an exchange of information. This means that for one end to successfully autonegotiate the duplex mode, the other end must also be set to autonegotiate. Otherwise, one end will never see any duplex information from the other end and won't determine the correct common mode.