Installing LogMeIn Hamachi for Linux (Beta) command line version on Ubuntu 16.04 LTS

I have used Hamachi for years, and while it does not seem to be heavily marketed, if you are an IT pro I do not know how you live without this software.

I recently configured a server with Ubuntu 16.04 LTS and wanted remote access to it from my mobile phone and other computers while away from my desk.

When I tried to install LogMeIn Hamachi for Linux (Beta) on Ubuntu 16.04 LTS, the installation failed because lsb-core was not available. The Hamachi documentation clearly states that LSB 3.0 or greater is required.

LogMeIn Hamachi for Linux (Beta)

Please make sure you have the LSB 3.0 (or above) package installed prior to attempt installing the product. If you had an older version of Hamachi for Linux, please make sure you uninstall it prior to attempt installing the latest software and read the README file located in the download package. Make sure that you have an ARMv4T or better processor and tun/tap driver installed before running Hamachi for Linux with ARM support.

Do the following to install the LogMeIn Hamachi Client.

# The following commands install Hamachi in one pass
# go to the Downloads folder
cd ~/Downloads
# download the Hamachi client (64-bit)
wget https://secure.logmein.com/labs/logmein-hamachi_2.1.0.139-1_amd64.deb
# download the Hamachi client (32-bit), only needed on 32-bit systems
wget https://secure.logmein.com/labs/logmein-hamachi_2.1.0.139-1_i386.deb
# install lsb-core from Ubuntu 14.04 (trusty)
#  Out of desperation I downloaded and installed the lsb-core package for Ubuntu 14.04
sudo add-apt-repository "deb http://cz.archive.ubuntu.com/ubuntu trusty main" && sudo apt-get update && sudo apt-get install lsb-core -y
# install the Hamachi client (use the i386 package on 32-bit systems)
sudo dpkg -i ./logmein-hamachi_2.1.0.139-1_amd64.deb
# join your network (replace 000-000-000 with your network id)
sudo hamachi do-join 000-000-000

Update the last line of code with your appropriate network id before running this.

Then go to your LogMeIn Central and approve the network join request.

**It worked!** I connected to the Linux server from my iPhone 6 Plus.

[Screenshot: SSH to the server over Hamachi from my iPhone, like a boss]

Happy Computing.

The two apps I tested this with are:

Server Auditor

Reflection for UNIX – SSH Client


SQL Server Error: 9001, Severity: 21, State: 5 – The log for database 'database_name' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Upon being greeted with the above error in the logs, I started investigating the issue.

I found the issue to be caused by the database being closed, and not any major underlying disk issue.

Check if your database has auto_close enabled.


select @@SERVERNAME as server_name,
       getutcdate() as report_date_utc,
       name as database_name,
       is_auto_close_on,
       state_desc,
       user_access_desc
from sys.databases
where is_auto_close_on = 1
order by name asc;

If AUTO_CLOSE is on, switch the database to no longer use it.

Before making any changes, check the integrity of the database. If this command generates no errors, then move on to changing the AUTO_CLOSE option.

dbcc checkdb('database_name');

-- cycle the database offline and back online to clear the closed state
alter database [database_name] set offline with rollback immediate;
go
alter database [database_name] set online;
go

-- disable AUTO_CLOSE so the database stays open
alter database [database_name] set AUTO_CLOSE OFF;

NoSQL company Basho loses CEO and CTO

Gigaom

Basho, a NoSQL startup whose Riak database competes against the likes of Cassandra in scale-out environments, has lost its CEO Greg Collins, CTO Justin Sheehy and Chief Architect Andy Gross. In an interview with the Register, Sheehy said the departures aren’t as bad as they look and that the company is in good hands. Perhaps, although whoever replaces Collins will be the company’s fourth CEO since it was founded in 2007, and neither of the company’s co-founders remain. Basho has raised more than $31 million in venture capital, with its last funding round of $11.1 million coming in July 2012.


Connecting to SQL Server with R using RJDBC

Download the Microsoft JDBC Driver for SQL Server from here

Save the files to a convenient location: I chose C:\jdbc\sqljdbc_4.0\

Many posts show the class name as “com.microsoft.jdbc.sqlserver.SQLServerDriver”, but this is incorrect.

com.microsoft.jdbc.sqlserver.SQLServerDriver # incorrect class name
com.microsoft.sqlserver.jdbc.SQLServerDriver # correct class name

My Machine Setup:

  • Windows 7  Enterprise – 64 Bit
  • R Studio Version 0.97.551
  • R version 3.0.1 (2013-05-16), platform x86_64-w64-mingw32
  • Microsoft SQL Server 2008, 2012 Installed

If you use a tool like 7-zip to explode the jar file, you will notice the driver class file is located at:

“C:\jdbc\sqljdbc_4.0\enu\sqljdbc4\com\microsoft\sqlserver\jdbc\SQLServerDriver.class”
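That on-disk location is just the standard Java convention of turning the package-qualified class name into a directory path inside the jar, which makes it a handy way to sanity-check which class name a driver jar actually provides. A minimal sketch of the mapping:

```python
# A fully qualified Java class name maps to a path inside the jar:
# each package segment becomes a directory, and ".class" is appended.
def class_to_jar_path(class_name):
    return class_name.replace(".", "/") + ".class"

print(class_to_jar_path("com.microsoft.sqlserver.jdbc.SQLServerDriver"))
# com/microsoft/sqlserver/jdbc/SQLServerDriver.class
```

If the path you see inside the jar does not match the class name you pass to JDBC, the driver will fail to load.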


# reference document on RJDBC
# http://cran.r-project.org/web/packages/RJDBC/RJDBC.pdf
# install.packages("RJDBC",dep=TRUE)
library(RJDBC)
drv <- JDBC("com.microsoft.sqlserver.jdbc.SQLServerDriver" , "C:/jdbc/sqljdbc_4.0/enu/sqljdbc4.jar" ,identifier.quote="`")
conn <- dbConnect(drv, "jdbc:sqlserver://SERVERNAME:55158;databaseName=master", "sa", "password")
d <- dbGetQuery(conn, "select * from sys.databases where database_id <= 4 ")
summary(d)

You can download this script from here

I also tested connecting to the same SQL Server with the Microsoft driver from Ubuntu.

In all, I have tested connecting to SQL Server using R from Windows, Ubuntu, and OS X. Below are links to the gists which contain the code.

Windows

Ubuntu

Mac OS X 

For the code used in that example look at my gist

Understanding I/O Performance for SQL Server on Amazon EC2 / AWS.

Having run SQL Server on EC2, I can tell you EC2 is a very stable platform; however, you are required to pay for the performance you need. I have run several repeated tests, and my experience is that Amazon gives you exactly what you pay for.

If you are having a performance problem, you need to look at your entire infrastructure and ensure you have not over provisioned one aspect of the system, while under-provisioning another.

The AWS blog lists the dedicated network throughput for each instance size. I have chosen to expand on this to show you the best choices for EBS volume configuration on these instance types. If your only objective is maxing out I/O performance, the table below should be sufficient. If, however, you need large amounts of space, you may choose to increase your volume count and reduce the provisioned IOPS per volume.


There are several key things I have learned and want to share:

1) Your instance size determines your guaranteed network throughput.

2) Everything that leaves your server traverses that single NIC (disk I/O, network I/O, everything).

3) 1,000 IOPS is approximately 16 MB per second on AWS, because the block size used for the calculation is 16 KB.

4) If you are running SQL Server with terabytes of data, you most likely need an instance size rated High for network performance, and potentially one of the newer instances that promise 10 Gigabit throughput.

5) Do configure your server to be EBS-Optimized.

6) EBS volumes are currently limited to 1 TB in size, so to create larger disks use software RAID in your operating system. (P.S. Amazon will deliver the IOPS of all volumes in the RAID array; just remember you cannot exceed the dedicated throughput of your NIC.)

7) If your backups are running terribly long and you can't seem to figure out why, you most likely have an I/O bottleneck related to your server configuration. Amazon is NOT the problem.
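The arithmetic behind points 3 and 6 can be sketched quickly. This is an illustrative calculation only, assuming a 16 KB I/O size and decimal units (1 Mbps = 10^6 bits per second, 1 KB = 10^3 bytes), which is what reproduces the figures in the table below:

```python
# Convert an instance's dedicated network throughput (Mbps) into the maximum
# IOPS it can push to EBS, assuming a 16 KB I/O size and decimal units.
def max_ebs_iops(throughput_mbps, block_kb=16):
    bytes_per_second = throughput_mbps * 10**6 / 8    # Mbps -> bytes/s
    return bytes_per_second / (block_kb * 10**3)      # bytes/s -> IOPS

# The aggregate IOPS of a software RAID array is still capped by the NIC.
def effective_iops(volume_iops, throughput_mbps):
    return min(sum(volume_iops), max_ebs_iops(throughput_mbps))

print(max_ebs_iops(500))                  # 3906.25 (m1.large)
print(max_ebs_iops(1000))                 # 7812.5  (m1.xlarge)
print(effective_iops([4000, 4000], 500))  # 3906.25 -- NIC-bound, not disk-bound
```

The last line shows why buying more provisioned IOPS than your NIC can carry is wasted money: two 4,000-IOPS volumes behind a 500 Mbps instance still deliver only about 3,906 IOPS.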

Below is the table; I hope it works for you.


Any questions / comments that may assist in improving this post are appreciated.


| Instance Type | Dedicated Throughput | Dedicated Throughput (MB/s) | Max IOPS Within EBS Throughput | Optimal Provisioned IOPS Purchase | Min Volume Size (GB) @ Max IOPS | Min Volume Count for Max IOPS |
|---|---|---|---|---|---|---|
| m1.large | 500 Mbps | 62.5 | 3,906.25 | 4,000 | 400 | 1 |
| m1.xlarge | 1000 Mbps | 125 | 7,812.50 | 8,000 | 800 | 2 |
| m2.2xlarge (new) | 500 Mbps | 62.5 | 3,906.25 | 4,000 | 400 | 1 |
| m2.4xlarge | 1000 Mbps | 125 | 7,812.50 | 8,000 | 800 | 2 |
| m3.xlarge (new) | 500 Mbps | 62.5 | 3,906.25 | 4,000 | 400 | 1 |
| m3.2xlarge (new) | 1000 Mbps | 125 | 7,812.50 | 8,000 | 800 | 2 |
| c1.xlarge (new) | 1000 Mbps | 125 | 7,812.50 | 8,000 | 800 | 2 |