David Lee's blog

Docker

In June, an open source technology called Docker received a strong endorsement from Google's star engineer Eric Brewer, who said that no developer technology had taken off so enormously since the rise of the Ruby on Rails framework. Docker is a container technology that packages network services and isolates tasks on a server so that they cannot interfere with one another. Much more importantly, a container can easily be moved to a different server and redeployed there with little effort.

Developers have a cloud computing vision in which the internet is treated as one giant computer providing unlimited computing resources. In reality it is not so simple: the same service is hard to run across different platforms and hosts. Virtual machines provide one solution, but they require deploying an image of an entire operating system. By comparison, Docker offers an extremely lightweight way to deploy services more quickly and conveniently.
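As a rough sketch of that workflow (the image name and port below are placeholders, not from the original post), a service is built once into an image and then runs unchanged on any host with a Docker daemon:

docker build -t myservice .              # build an image from the current directory
docker run -d -p 8080:8080 myservice     # start it as an isolated container

Pushing the same image to a registry lets any other Docker host pull and run it, which is exactly what makes moving a service between servers so cheap.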

Eric Brewer deserves a proper introduction: he is one of the elite engineers at Google. In the mid-1990s, as a professor at the University of California, Berkeley, Brewer built Inktomi, the first web search engine to run on a vast network of cheap machines, as opposed to one enormously powerful (and enormously expensive) computer server. Over the next two decades, companies like Google, Amazon, and Facebook built on the philosophy behind Brewer's CAP theorem and pushed it to an extreme. “He is the grandfather of all the technologies that run inside Google,” says Craig McLuckie, a longtime product manager for Google's cloud services.

Valgrind: the memory error detector

The C/C++ programming languages provide powerful memory handling through pointers, giving programmers efficient and flexible control over low-level memory. However, dynamic memory errors are among the toughest parts to debug. Almost every programmer has suffered a segmentation fault or a memory leak. These errors appear only at run time and cannot be detected by the compiler, so they eat up a lot of development time. Fortunately, there are many memory profiling tools that can help programmers find such bugs effectively. Valgrind, the topic of this article, is one of these convenient tools.

Valgrind is an open source dynamic analysis framework. It can be used to detect memory, cache, and threading bugs, to profile program performance (including cache and branch-prediction behavior), and even to plug in external tools for more detailed program testing. This article introduces only the memory error detection function of Valgrind.

First, of course, we need a program to examine. Suppose we write the source code below (adapted from the Valgrind website) and name it “test.c”.
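This is the small buggy program from the Valgrind quick-start guide; it overruns a heap block and leaks memory:

#include <stdlib.h>

void f(void)
{
    int *x = malloc(10 * sizeof(int));
    x[10] = 0;                     /* problem 1: heap block overrun */
}                                  /* problem 2: x is never freed (leak) */

int main(void)
{
    f();
    return 0;
}

Compiling with debug information and running it under Valgrind's default Memcheck tool reports both problems:

gcc -g -o test test.c
valgrind --leak-check=yes ./test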

Libgtop

How can we get the resource usage of a Linux system, such as memory and CPU utilization, while a process is running? We can read the system file /proc/<process id>/stat, or we can use the “top” command in the shell. However, both approaches require extra effort, because the file or the command output must be parsed before the numbers can be used. Here is another way to get information about the resource usage of the whole system or of a specific process: Libgtop, an open source C library.

Libgtop is a library of the GNOME project, used to implement the “top” functionality of the desktop environment. It depends on Glib, another GNOME library. The latest version of Libgtop is 2.28. Note that Glib 2.6.0 and Intltool 0.35.0 or later must be installed before installing Libgtop.

In general, CPU utilization is calculated from the time the CPU spends in different modes, usually divided into user mode, nice mode, system (kernel) mode, and idle mode. The Libgtop API reports the CPU time (in clock ticks) accumulated in each mode since system boot. For example, the source code below calculates the overall CPU utilization.

#include <stdio.h>
#include <unistd.h>                       /* sleep() */
#include <glibtop.h>
#include <glibtop/cpu.h>

int main(void)
{
    glibtop_cpu cpu_begin, cpu_end;
    guint64 dt, du, dn, ds;
    double cpu_rate;

    glibtop_init();

    /* Sample the cumulative tick counters twice, one second apart. */
    glibtop_get_cpu(&cpu_begin);
    sleep(1);
    glibtop_get_cpu(&cpu_end);

    /* Clock ticks spent in each mode during the interval. */
    dt = cpu_end.total - cpu_begin.total;
    du = cpu_end.user  - cpu_begin.user;
    dn = cpu_end.nice  - cpu_begin.nice;
    ds = cpu_end.sys   - cpu_begin.sys;

    cpu_rate = 100.0 * (du + dn + ds) / dt;
    printf("CPU utilization: %.1f%%\n", cpu_rate);
    return 0;
}

Note that we need the tick counts at two different points in time, which is why glibtop_get_cpu is called twice. Monitoring memory utilization, on the other hand, is much simpler:

#include <stdio.h>
#include <glibtop.h>
#include <glibtop/mem.h>

int main(void)
{
    glibtop_mem memory;
    glibtop_init();
    glibtop_get_mem(&memory);
    /* "used" counts buffers and page cache as in-use memory. */
    printf("Memory utilization: %.1f%%\n",
           100.0 * memory.used / memory.total);
    return 0;
}

A wide variety of resource types can be monitored with Libgtop. Beyond the system-wide CPU and memory utilization described above, it also covers the CPU and memory usage of a specific process, swap, file systems, network interfaces, and so on. Details of the API and data structures can be found on GNOME's official website: http://developer.gnome.org/libgtop/
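As one more small sketch, the per-process calls follow the same pattern. For example, glibtop_get_proc_mem reports the memory footprint of a single process (the PID below is a placeholder):

#include <sys/types.h>
#include <glibtop.h>
#include <glibtop/procmem.h>

int main(void)
{
    glibtop_proc_mem pmem;
    pid_t pid = 1;                 /* placeholder: any process ID */

    glibtop_init();
    glibtop_get_proc_mem(&pmem, pid);
    /* pmem.resident is the process's resident set size in bytes. */
    return 0;
}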

Introduction to the Google File System

Why does Google dominate the search engine market? One important reason is excellent performance, which relies on the underlying file system. Google designed a unique distributed file system to meet its huge storage demand, known as the Google File System (GFS). Google has not released GFS as open source software, but it has published some technical details, including an official paper.

There are two main differences between GFS and a traditional distributed file system. First, component failures are the norm rather than the exception. Failures can be caused by application bugs, operating system bugs, human error, and even hardware or network problems. Since even expensive disk hardware cannot rule out every failure, Google simply builds its storage machines from multiple inexpensive commodity components and guards against failures by integrating constant monitoring, error detection, fault tolerance, and automatic recovery into GFS.

Second, most files are mutated by appending new data rather than by overwriting or removing existing data. Once written, data usually only needs to be readable, not writable. Most read operations are “large streaming reads”, in which individual operations typically read hundreds of KB, and more commonly 1 MB or more. Note also that the system stores a modest number of large files, each typically 100 MB or larger. GFS supports small files, but does not optimize for them.

The architecture of GFS resembles the supernode (Master) plus distributed nodes (chunkservers) approach. The actual data is stored on the chunkservers, which report their state to the Master periodically. When a client wants to read a file, it queries the Master about the target chunk, and the Master responds with the location of an available chunkserver. The client can then request the chunk data from that chunkserver directly.
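A minimal sketch of that read path, with hypothetical helper names (GFS has no public API; the fixed 64 MB chunk size is from the paper):

#include <stddef.h>

#define CHUNK_SIZE (64L * 1024 * 1024)   /* GFS uses fixed 64 MB chunks */

/* Hypothetical RPC stubs: illustrative names only, not a real API. */
typedef struct { long handle; const char *replica[3]; } chunk_info;
chunk_info master_lookup(const char *path, long chunk_index);
void chunkserver_read(const char *server, long handle,
                      long offset, size_t len, char *buf);

/* The Master hands out metadata only; file data flows directly
   between the client and a chunkserver. */
void gfs_read(const char *path, long offset, size_t len, char *buf)
{
    long index = offset / CHUNK_SIZE;           /* which chunk holds the range */
    chunk_info c = master_lookup(path, index);  /* step 1: ask the Master      */
    chunkserver_read(c.replica[0], c.handle,    /* step 2: read from a replica */
                     offset % CHUNK_SIZE, len, buf);
}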

GFS supports the enormous data volume and traffic of the Google search engine. BigTable, the database system used by a number of Google applications such as Gmail, Google Maps, YouTube, and other cloud services, is also built on GFS. We can say that GFS is the killer technology of the cloud era.

More details can be found in the GFS paper: http://labs.google.com/papers/gfs.html

LDAP

Consider two different problems. First, a huge organization has thousands of members, many departments, and many IT resources. How do we maintain an updatable and accessible online address book for it? Second, an MIS staff member has to maintain different sets of usernames and passwords for a number of different systems (such as Linux login, Apache, Samba, mail services, etc.). How do we make this work easier? The two problems seem unrelated, but they can be addressed by the same solution: LDAP (Lightweight Directory Access Protocol).

LDAP is a protocol for accessing online directory services, based on X.500. It omits many of the complicated details of the X.500 protocol, making it a flexible and lightweight application protocol built on IP networks. For the first problem above, LDAP's flexible design lets us catalog different types of resources into a distributed online database. For the second, it provides a standardized interface that many different applications can speak, so integrating the configuration of those applications becomes easy.
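To get a concrete feel for that standardized interface, here is a minimal sketch using the OpenLDAP client library (the server URL, base DN, and search filter are placeholders):

#include <stdio.h>
#include <ldap.h>

int main(void)
{
    LDAP *ld;
    LDAPMessage *result;
    int version = LDAP_VERSION3;

    /* Connect to a directory server; the URL is a placeholder. */
    if (ldap_initialize(&ld, "ldap://localhost") != LDAP_SUCCESS)
        return 1;
    ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

    /* Anonymous subtree search for one user entry. */
    if (ldap_search_ext_s(ld, "dc=example,dc=com", LDAP_SCOPE_SUBTREE,
                          "(uid=dlee)", NULL, 0, NULL, NULL, NULL,
                          0, &result) == LDAP_SUCCESS) {
        printf("entries found: %d\n", ldap_count_entries(ld, result));
        ldap_msgfree(result);
    }
    ldap_unbind_ext_s(ld, NULL, NULL);
    return 0;
}

Compile against the OpenLDAP development package, e.g. gcc ldap_demo.c -o ldap_demo -lldap (some systems also need -llber).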

From a macro perspective, LDAP organizes data into a tree structure called the DIT (Directory Information Tree). A DIT can be cut into many sub-trees, each of which can be stored on a different LDAP server to achieve a distributed architecture. Each record in the DIT is identified by a unique distinguished name (DN). Like the “absolute path” in an ordinary file system, the DN identifies a record's position in the DIT.
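For instance, a hypothetical person entry (all names here are illustrative) might carry the DN below, read right to left from the directory root down to the leaf:

dn: uid=dlee,ou=People,dc=example,dc=com

Here dc=example,dc=com is the root suffix of the tree, ou=People is an organizational-unit branch, and uid=dlee names the entry itself; a sub-tree such as ou=People could live on its own LDAP server.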