Wednesday, September 28, 2011

Getting "GLIBCXX_3.4.9 not found" when starting the license manager for MATLAB

If you are starting the latest version of the MATLAB license server, you may encounter the error "GLIBCXX_3.4.9 not found".

For more information, do look at the MathWorks article "Why do I get an error 'GLIBCXX_3.4.9 not found'"

According to the MathWorks website:



Solution:

This issue is caused by a missing or outdated libstdc++.so.6 as required by the keycheck application (R2011a) or the MLM vendor daemon (R2011b). Both the R2011a keycheck and R2011b MLM vendor daemon require libstdc++.so.6.0.10. Refer to your operating system documentation for information on how to update or install a missing library.

If the necessary version of the library is not available for your Linux distribution it can be copied and installed from the MATLAB installation files following the instructions below:

NOTE: $MATLAB refers to the MATLAB installation location (ex: /usr/local/MATLAB/R2011b)
NOTE: $ARCH refers to the machine architecture (ex: glnx86 for Linux 32-bit or glnxa64 for Linux 64-bit)

If MATLAB is installed in addition to the FlexNet license manager, skip directly to step 3.

1. Create a subdirectory within the MATLAB installation folder as shown below:

[root@localhost ~]# mkdir -p $MATLAB/sys/os/$ARCH


2. Copy the libstdc++.so.6.0.10 library from the MATLAB installation files (either an installation DVD or the extracted downloaded installer archive) into the newly created directory:

[root@localhost ~]# cp /media/MATLAB_R2011b/bin/$ARCH/libstdc++.so.6.0.10 $MATLAB/sys/os/$ARCH


3. Run 'ldconfig' to create symbolic links to the new library and update the dynamic linker cache:

[root@localhost ~]# ldconfig $MATLAB/sys/os/$ARCH
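
To verify that the dynamic linker now sees the new library, you can grep the linker cache (a quick sanity check, not part of the original MathWorks instructions):

[root@localhost ~]# ldconfig -p | grep libstdc++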

World's largest Cloud Storage System designed by SDSC



San Diego Supercomputer Center (SDSC) has designed the world's largest cloud storage system specifically targeted at academic and research use. Dubbed the SDSC Cloud, the total cloud capacity will start at a raw capacity of 5.5 PB and is scalable to "hundreds of petabytes". Rates start at US$3.46 a month for 100 GB of storage.

For more information do look at
  1. San Diego Supercomputer Center launches world's largest academic cloud storage system 
  2. SDSC Cloud Storage Services

Monday, September 26, 2011

Understanding VXLAN Virtual-Physical-Cloud L2/L3 Networks by ARISTA

An interesting article from Arista:

Understanding VXLAN Virtual-Physical-Cloud L2/L3 Networks by ARISTA (pdf)

Video interview - VMware CTO Steve Herrod and Arista Founder, CDO and Chairman Andy Bechtolsheim

  1. Introduction to the CTO Video Series with Steve Herrod
  2. Part I: The State of Cloud Computing: Applications as a Service
  3. Part II: The Semantics of Cloud
  4. Part III: Moore's Law and its Impact on Software Infrastructure and Network Capacity
  5. Part IV: Storage: From Mechanics to Silicon and Network Scalability
  6. Part V: Andy's Current Software Focus at Arista
  7. Part VI: Power Efficiency and Chip Design
  8. Part VII: Power and the Datacenter: The Impact of Software Improvements
  9. Part VIII: Private vs Public Cloud: Transparency and Economics
  10. Part IX: Security and the Cloud
  11. Part X: Arista and the Importance of Low Latency
  12. Part XI: Today's Financial Trading Model
  13. Part XII: What Technologies are Interesting to Andy?
  14. Part XIII: Who Does Andy Admire: Einstein vs The Hardy Boys
  15. Part XIV: The Importance of Science Education
  16. Part XV: Wrap-up

Server has no node list when executing pbsnodes -s

If you see the error "Server has no node list" when you execute "pbsnodes -s 192.168.1.1" (your Torque server DNS name or IP), it is due to a missing "nodes" file that is supposed to be at /var/spool/torque/server_priv/

The nodes file should look something like this:
## This is the TORQUE server "nodes" file.
##
## To add a node, enter its hostname, optional processor count (np=),
## and optional feature names.
##
## Example:
##    host01 np=8 featureA featureB
##    host02 np=8 featureA featureB
##
## for more information, please visit:
##
## http://www.clusterresources.com/torquedocs/nodeconfig.shtml

compute-c00     np=12
compute-c01     np=12
compute-c02     np=12
compute-c03     np=12
compute-c04     np=12

Restart the pbs_sched and pbs_server services
# service pbs_server restart
# service pbs_sched restart
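
After the restart, you can confirm that the server has picked up the node list (a quick check, using the example node names above):

# pbsnodes -a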

Sunday, September 25, 2011

Using iperf to measure the bandwidth and quality of network

This writeup is taken from the iPerf Tutorial by OpenManiak; for a more detailed and in-depth writeup, do read up on the full iPerf Tutorial. According to the iperf project site:

Iperf was developed by NLANR/DAST as a modern alternative for measuring maximum TCP and UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP characteristics, and reports bandwidth, delay jitter, and datagram loss.

Iperf can generate traffic using TCP and UDP to perform the following kinds of tests:
  • Latency (response time or RTT): can be measured with the Ping utility.
  • Jitter: can be measured with an Iperf UDP test.
  • Datagram loss: can, again, be measured with an Iperf UDP test.
  • Bandwidth: measured with the Iperf TCP tests.
A collection of selected iperf usages is written in Using iperf to measure the bandwidth and quality of network from linuxCluster.wordpress.com
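
As a minimal sketch of the commands involved (the hostname is a placeholder), start iperf in server mode on one machine and test from a client. The plain TCP test measures bandwidth:

server$ iperf -s
client$ iperf -c server.example.com

The UDP test (-u) reports jitter and datagram loss, here at a 10 Mbit/s target rate:

server$ iperf -s -u
client$ iperf -c server.example.com -u -b 10M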

Saturday, September 24, 2011

Cannot resolve default server host for Torque, check server_name file

If you are encountering a Torque error something like this:

Cannot resolve default server host 
'headnode.cluster.com' - check server_name file.
pbsnodes: cannot connect to server headnode.cluster.com, 
error=150010 (Access from host not allowed, or unknown host)


To resolve this issue, you have to look at three possibly misconfigured areas:
  1. Ensure your /etc/sysconfig/network reflects the correct hostname
  2. Ensure your /var/spool/torque/server_name is the same on both the head and compute nodes
  3. Ensure the environment variable PBS_DEFAULT reflects the correct hostname. For my situation, I have placed the environment variable in /etc/profile.d/torque.sh, as shown below
This should eliminate the issue.
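
As a sketch, using the hostname from the error above, the server_name file holds a single line with the hostname, and the profile script exports the matching variable:

# cat /var/spool/torque/server_name
headnode.cluster.com

# cat /etc/profile.d/torque.sh
export PBS_DEFAULT=headnode.cluster.com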

Tuesday, September 20, 2011

/.../libfftw3f.a: could not read symbols: Bad value when compiling Gromacs

If you encounter an error like "/.../libfftw3f.a: could not read symbols: Bad value" when compiling Gromacs, it is likely because the compilation of FFTW did not enable shared libraries ("--enable-shared"). GROMACS seems to require the shared library build of FFTW. So do follow the steps listed in
  1. Installing FFTW
  2. Installing Gromacs 4.0.x on CentOS 5.x
and you should have a clean compilation.
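
As a quick sketch of the key step (the prefix path is just an example), the FFTW configure line should include --enable-shared; --enable-float builds the single-precision libfftw3f that the error message refers to:

$ ./configure --prefix=/usr/local/fftw --enable-float --enable-shared
$ make
# make install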

Monday, September 19, 2011

How to enable Directory listing on Apache

To enable directory listing for a particular folder in Apache, you need to set the Options directive either in /etc/httpd/conf/httpd.conf or in your own file at /etc/httpd/conf.d/yourfile.conf. Either way is fine:

<Directory /home/mysite/public_html>
        Options Indexes FollowSymLinks
        AllowOverride None
</Directory>
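
After saving the configuration, reload Apache for the change to take effect:

# service httpd reload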

Tuesday, September 13, 2011

/usr/bin/ld cannot find -lliblapack.so

We were linking our C program with gcc, which requires the LAPACK and BLAS libraries. I did not compile my own libraries but used the CentOS lapack and blas packages instead. For more information on installing lapack and blas, see Installing lapack, blas and atlas on CentOS 5.4

We compiled our code with:
$ gcc exact.c -L/usr/lib64 -lliblapack.so.3 -llibblas.so.3
The resulting error:
/usr/bin/ld: cannot find -lliblapack.so.3
collect2: ld returned 1 exit status
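
The likely reason, as I understand it: the -l flag expects the library name without the "lib" prefix and ".so" suffix, so -lliblapack.so.3 makes ld look for a file named libliblapack.so.3.*. If the lapack-devel and blas-devel packages are installed (they provide the unversioned liblapack.so and libblas.so symlinks), the conventional form would be:

$ gcc exact.c -L/usr/lib64 -llapack -lblas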

I checked my /etc/ld.so.conf.d/ to ensure that it includes /usr/lib64. Of course, remember to run ldconfig if you have made any changes.

If the -L and -l flags do not let the linking process resolve the libraries correctly, give the complete path to each library directly, like the command below, and it compiles nicely:
$ gcc exact.c /usr/lib64/liblapack.so.3 /usr/lib64/libblas.so.3

In case you do not know where to locate your libraries, you can run:
$ locate liblapack

For similar notes on the linking challenges, you may want to explore this forum thread
problems in linking lapack to g77

Sunday, September 11, 2011

Recommended sshd_config for OpenSSH

There are a few settings in /etc/ssh/sshd_config we can set to improve security, performance and user experience. Much of this information comes from SSH, The Secure Shell, 2nd Edition from O'Reilly.

1. Use the SSH-2 protocol and disable the SSH-1 protocol altogether
Protocol 2

2. Ensure that the HostKey and PidFile are located on the machine's local disk and not on an NFS mount. The default settings already point to local files, like those below:
HostKey /etc/ssh/ssh_host_key
PidFile /var/run/sshd.pid

3. File and directory permissions
The StrictModes value requires users to protect their SSH-related files and directories, or else they will not authenticate. The default value is yes.
StrictModes yes

4. Enable keepalive messages
Keepalive messages are enabled so that connections to clients that have crashed or become unreachable are terminated, rather than left as orphaned processes that require manual intervention by the sysadmin to clean up.
Port 22 
ListenAddress 0.0.0.0
TcpKeepAlive yes

5. Disable Reverse DNS lookup
UseDNS no

6. Select a shorter login grace time
The default login grace time is 2 minutes, which you might want to shorten. The value here is 30 seconds:
LoginGraceTime 30

7. Authentication
The default settings are fine unless you wish to use public-key authentication and wish to disable Kerberos, interactive and GSSAPI authentication:
PubkeyAuthentication yes
PasswordAuthentication no
PermitEmptyPasswords no
RSAAuthentication yes
RhostsRSAAuthentication no
HostbasedAuthentication no
KerberosAuthentication no
ChallengeResponseAuthentication yes
GSSAPIAuthentication no
IgnoreRhosts yes

8. Access Control
If you wish to allow only selected users or groups to use ssh, you can use:
AllowGroups users
AllowUsers me_only
DenyGroups black_list
DenyUsers hacker_id
For more information, see How do I permit specific users SSH access?


9. Securing TCP port forwarding and X forwarding
AllowTcpForwarding yes
X11Forwarding yes
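
After editing /etc/ssh/sshd_config, it is worth validating the file before restarting, since a broken configuration can lock you out of remote machines (sshd -t only checks the config and prints nothing on success):

# sshd -t
# service sshd restart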

Saturday, September 10, 2011

rpm needed for blas, blacs, scalapack for CentOS

If you are installing blas, blacs, scalapack in CentOS manually, you will need these packages.

mpi-selector-1.0.2-1.el5.noarch.rpm
libgfortran-4.1.2-50.el5.x86_64.rpm
lapack-3.0-37.el5.x86_64.rpm
lam-7.1.2-14.el5.x86_64.rpm
lam-libs-7.1.2-14.el5.x86_64.rpm
blacs-1.1-24.el5.1.x86_64.rpm           [EPEL]
blacs-devel-1.1-24.el5.1.x86_64.rpm     [EPEL]
blas-3.0-37.el5.x86_64.rpm
blas-devel-3.0-37.el5.x86_64.rpm
scalapack-1.7.5-1.el5.x86_64.rpm        [EPEL]

Of course, if you use yum install, that will be more efficient. You will need the EPEL repository in addition to the standard CentOS repository.
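
For example, one possible yum command covering the packages above (package names taken from the list; EPEL must already be enabled for blacs and scalapack):

# yum install blas blas-devel lapack blacs blacs-devel scalapack lam lam-libs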

Thursday, September 8, 2011

Hitachi Data Systems Corporation (HDS) Acquires BlueArc Corporation


Hitachi Data Systems Corporation (HDS), a wholly owned subsidiary of Hitachi, Ltd. (NYSE: HIT / TSE: 6501), announced on 7 Sep 2011 a significant milestone in its strategy to give customers seamless access to all data, content and information with the acquisition of BlueArc Corporation, a leader in scalable, high-performance network storage. Building upon a successful five-year OEM partnership, HDS and BlueArc will give customers the unmatched combination of Hitachi enterprise-class quality, reliability and support with innovative, highly scalable, high-performance BlueArc network attached storage (NAS).

Hitachi Data Systems Announces Acquisition of BlueArc

Wednesday, September 7, 2011

Resolving Slow SSH Login

If you are facing slow login times, it might be because reverse DNS is not responding quickly enough. This symptom can show up in your log file:

# tail -50 /var/log/secure


You will notice that there is a time lag between accepting the password and opening the session:

Sep  6 10:15:42 santol-h00 sshd[4268]: 
Accepted password for root from 192.168.1.191 port 51109 ssh2

Sep  6 10:15:52 santol-h00 sshd[4268]: pam_unix(sshd:session): 
session opened for user root by (uid=0)

To fix the issue, you should modify the /etc/ssh/sshd_config file

# vim /etc/ssh/sshd_config

In /etc/ssh/sshd_config, set UseDNS to no:
#ShowPatchLevel no
UseDNS no
#PidFile /var/run/sshd.pid

Restart the ssh service

# service sshd restart
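
To see the difference, you can time a no-op login before and after the change (using the server from the log above as an example):

$ time ssh root@santol-h00 exit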

Feel the login speed :)

Tuesday, September 6, 2011

Interesting Writeup on the Four Pillars of Software-Defined Cloud Networking

Do read this interesting article.

The Four Pillars of Software-Defined Cloud Networking by Arista Networks

Saturday, September 3, 2011

Spirit of Startup. Security of Hewlett Packard

I think most of us have heard about Hewlett-Packard spinning off its PC business. Read the official response from Hewlett-Packard on what they will do with their PC business. A new startup?

Here is the message from their Executive Vice President, Personal Systems Group, Todd Bradley, taken from the Hewlett-Packard page "Spirit of Startup. Security of Hewlett Packard":



HP is the #1 PC maker on the planet, and that won't change. I can assure you our future is brighter than ever.

Spirit of a Startup

Our preferred course to harness our vision of the future is to build a separate, more agile company. It's time to think like a startup again. It's time to be nimble and revolutionary. It's time again for world-changing innovation. And so, it's time we realized we're at a crossroads in an evolving HP.

But don't misunderstand: We - the same great folks who make HP PCs today - will make them tomorrow. We will continue to build on our legacy creating reliable, stylish, and high-performance PCs to improve your personal and professional life.

This is the future we are passionate about, and we hope this site will answer all your questions and leave you feeling as inspired and excited as we are for what lies ahead.

Security of HP
  • We became #1 by focusing on our customers' needs.
  • On its own, our PC business would be the 60th largest Fortune 500 company.
  • We sell two PCs every second.
  • We provide personal computing products, services, and support for customers in over
    170 countries.
  • We have been dedicated to our customers for over 70 years, and we look forward to many,
    many more.