June Huang's blog

CSS3 Gradient Buttons

CSS3 gradients allow us to display smooth transitions between colors on page elements without using images or JavaScript. They are now supported by major browsers such as Internet Explorer, Firefox, Chrome, and Safari. Below I will demonstrate how to use them to create gradient buttons.

Button CSS:

.button {
display: inline-block;
padding: 6px 22px;
-webkit-box-shadow: 1px 1px 0px 0px #FFF6AD;
-moz-box-shadow: 1px 1px 0px 0px #FFF6AD;
box-shadow: 1px 1px 0px 0px #FFF6AD;
-moz-border-radius: 5px;
-webkit-border-radius: 5px;
border-radius: 5px;
border: 1px solid #FFCC00;
color: #333333;
font: 14px Arial;
text-shadow: 1px 1px 0px #FFB745;
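/* The gradient itself: a solid fallback first for browsers without gradient
   support, then the prefixed and standard forms. The color stops here are
   illustrative values chosen for this example. */
background: #FFCC00;
background: -webkit-linear-gradient(top, #FFE47A, #FFCC00);
background: -moz-linear-gradient(top, #FFE47A, #FFCC00);
background: -o-linear-gradient(top, #FFE47A, #FFCC00);
background: linear-gradient(to bottom, #FFE47A, #FFCC00);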

}

Cookies

Cookies, in short, are pieces of information that a website stores on your computer when you visit it. Websites use cookies to keep track of your activities on the site, for example your login state. When you browse through or revisit the website, its cookie data are sent back to it so that activity-related information can be shown to you. Below I shall provide some examples of how cookies are used and discuss the privacy concerns surrounding them.

Websites can use cookies to track user activity, save preferences and gather information about the user. Here are some examples. E-commerce websites like Amazon use cookies to remember the items that customers put in their shopping carts, so customers do not necessarily have to log in before they can shop on the site. Cookies can also store customizations such as language, localization settings and interface layout for the user's convenience. Finally, cookies are commonly used for personalized advertising, for example in advertisements served by Google and Facebook: information about your interests is gathered from your recent searches and the links you click, and once advertisers learn this information they can show you advertisements you are more tempted to click on.
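
As a small illustration of the preference-storage case, here is a minimal browser-side sketch in TypeScript. The cookie name, value and expiry are assumptions made for the example, not taken from any particular site.

// Store a language preference in a cookie that expires in 30 days.
// The "lang" name, the value and the 30-day expiry are illustrative choices.
function setLanguageCookie(lang: string): void {
  const expires = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toUTCString();
  document.cookie = `lang=${encodeURIComponent(lang)}; expires=${expires}; path=/`;
}

// Read the preference back on a later visit; returns null if the cookie is absent.
// document.cookie is a single "name=value; name=value; ..." string.
function getLanguageCookie(): string | null {
  const match = document.cookie.match(/(?:^|;\s*)lang=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}

setLanguageCookie("en-GB");
console.log(getLanguageCookie()); // "en-GB" until the cookie expires or is deleted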

Although cookies are simply text files and can be deleted by the user at any time, there are privacy issues associated with some of their uses. Most people are not aware that cookies are used at all, let alone that their personal data are being collected. Even if they did know, they would have little control over what third-party websites do with their information. Browsers offer cookie settings that let users allow or disable cookies for specific websites or for all websites. Disabling cookies for all websites might not be a good idea, since some websites require cookies in order to function. Lastly, remember that cookies are not shared between different browsers, so you will have to edit the cookie settings individually in each browser.

References:
[1] HTTP cookie. (2012, September 17). In Wikipedia, The Free Encyclopedia. Retrieved 11:14, September 19, 2012, from http://en.wikipedia.org/w/index.php?title=HTTP_cookie&oldid=513075176
[2] How to Enable Cookies. Amazon: Help. Retrieved 16:55, September 19, 2012, from http://www.amazon.com/gp/help/customer/display.html?ie=UTF8&nodeId=200156940
[3] Advertising privacy FAQ. Google: Policies & Principles. Retrieved 17:16, September 19, 2012, from http://www.google.com/policies/privacy/ads/
[4] Cookies, Pixels, and Similar Technologies. Facebook: Help Center. Retrieved 17:48, September 19, 2012, from http://www.facebook.com/help/cookies

Web Application Frameworks

With the growing use of the Web and web services, websites in the Web 2.0 era no longer serve only static content. Content has become dynamic so that users can perform real-time tasks such as checking and sending mail. As a result, web projects grow in scale and become harder to maintain as new features are continually added.

Web application frameworks provide a software architecture that helps us organize and manage the different components of a web application. They also provide useful libraries for common tasks such as accessing the database, rendering templates and managing sessions.

Many web application frameworks use a Model-View-Controller (MVC) architecture that defines the logical components of the web application. The model, view and controller are explained below:

Model
The model handles the data of the system: it contains the data and the functions used to manipulate that data. Controllers and views obtain and change data through the model.

View
The view is the rendered part of the application that the user sees, in other words, the user interface through which the user interacts with the application.

Controller
Controllers handle requests from the user and return responses. A controller obtains the required data from the model, prepares it into a suitable format, inserts it into the view and renders the view for the user.

A typical request to the server happens as follows. The user interacts with the user interface and a request is sent to the server. The main controller handles the request by determining the appropriate delegate controller and passes control to it. The delegate controller interacts with the model to gather or update data for the view, renders the view and returns control to the main controller. The main controller then responds with the rendered view. The cycle repeats when the user interacts with the user interface again and sends a new request.
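
To make the division of responsibilities concrete, here is a minimal, framework-agnostic sketch in TypeScript. All of the class and method names are illustrative and not tied to any particular framework.

// Model: owns the data and the functions that manipulate it.
class TaskModel {
  private tasks: string[] = [];
  add(task: string): void { this.tasks.push(task); }
  all(): string[] { return [...this.tasks]; }
}

// View: renders what the user sees from the data it is given.
class TaskView {
  render(tasks: string[]): string {
    return tasks.length === 0
      ? "No tasks yet."
      : tasks.map((t, i) => `${i + 1}. ${t}`).join("\n");
  }
}

// Controller: handles a request, updates the model, and returns the rendered view.
class TaskController {
  constructor(private model: TaskModel, private view: TaskView) {}
  addTask(task: string): string {
    this.model.add(task);
    return this.view.render(this.model.all());
  }
}

// A "request" from the user flows through the controller to the model
// and comes back out as a rendered view.
const controller = new TaskController(new TaskModel(), new TaskView());
console.log(controller.addTask("Check mail"));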


References:
[1] Web application framework. (2011, May 28). In Wikipedia, The Free Encyclopedia. Retrieved 15:23, May 30, 2011, from http://en.wikipedia.org/w/index.php?title=Web_application_framework&oldid=431373642
[2] Model–view–controller. (2011, May 26). In Wikipedia, The Free Encyclopedia. Retrieved 17:12, May 30, 2011, from http://en.wikipedia.org/w/index.php?title=Model%E2%80%93view%E2%80%93controller&oldid=430946706

Web Crawlers - Crawling Policies

Continuing from my last blog entry on web crawlers, let me now explain in more detail how web crawlers traverse the Web. Web crawlers use a combination of policies to determine their crawling behavior: a selection policy, a revisit policy, a politeness policy and a parallelization policy. I shall discuss each of these below.

As only a fraction of the Web can be downloaded, a web crawler must use a selection policy to determine which resources are worth downloading; this is far more useful than downloading a random portion of the Web. An example of a selection policy is the PageRank policy (used by Google), where the importance of a page is determined by the links to and from that page. Other selection policies are based on the context of the page or on the resources' MIME types.

Web crawlers use a revisit policy to manage the cost associated with holding an outdated copy of a resource; the goal is to minimize this cost. This matters because resources on the Web are continually created, updated and deleted, all within the time it takes a web crawler to finish a crawl through the Web, and it is undesirable for the search engine to return an outdated copy of a resource. The cost of not revisiting a page is usually expressed in terms of freshness and age: freshness measures whether or not the local copy still matches the current resource, and age measures how long the local copy has been outdated.
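
A small TypeScript sketch of these two measures; the interface and field names are assumptions made for the example.

// A locally cached copy of a page, described by when the crawler fetched it
// and when the live page was last modified (illustrative field names).
interface CachedPage {
  fetchedAt: Date;        // when the crawler last downloaded the page
  liveModifiedAt: Date;   // when the live page was last modified
}

// Freshness: 1 if the local copy still matches the live page, 0 otherwise.
function freshness(page: CachedPage): number {
  return page.liveModifiedAt.getTime() <= page.fetchedAt.getTime() ? 1 : 0;
}

// Age: 0 while the copy is up to date, otherwise how long it has been outdated.
function ageMs(page: CachedPage, now: Date = new Date()): number {
  return page.liveModifiedAt.getTime() <= page.fetchedAt.getTime()
    ? 0
    : now.getTime() - page.liveModifiedAt.getTime();
}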

The politeness policy is used so that a site's performance is not heavily affected while the web crawler downloads a portion of the site; otherwise the server may be overloaded, as it has to handle requests from the site's visitors as well as from the crawler. Proposed solutions include introducing an interval between requests, which stops the crawler from flooding the server, and the robots exclusion protocol (robots.txt), through which site administrators indicate which portions of the site the crawler should not access.
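
As an illustration of the interval idea, here is a minimal per-host delay in TypeScript; the 10-second delay is an assumption, and a real crawler would also honor the site's robots.txt before fetching.

// Wait a fixed delay between consecutive requests to the same host.
const POLITENESS_DELAY_MS = 10_000;            // assumed interval for the example
const lastRequestAt = new Map<string, number>();

async function politeFetch(url: string): Promise<Response> {
  const host = new URL(url).host;
  const last = lastRequestAt.get(host) ?? 0;
  const wait = Math.max(0, last + POLITENESS_DELAY_MS - Date.now());
  if (wait > 0) await new Promise(resolve => setTimeout(resolve, wait));
  lastRequestAt.set(host, Date.now());
  return fetch(url);
}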

Parallelization policies are used to coordinate multiple web crawlers crawling the same Web space. The goal is to maximize the download rate while preventing the crawlers from downloading the same pages more than once.
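
One common way to achieve this is to partition URLs among the crawlers by host. The TypeScript sketch below hashes the host name to pick a crawler; the hash function and crawler count are illustrative assumptions.

// Assign every URL on the same host to the same crawler process.
function assignCrawler(url: string, crawlerCount: number): number {
  const host = new URL(url).host;
  let hash = 0;
  for (const ch of host) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;  // simple rolling hash
  }
  return hash % crawlerCount;                     // index of the responsible crawler
}

// Example: with 4 crawlers, no two crawlers ever fetch pages from the same host.
console.log(assignCrawler("https://example.com/page1", 4));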

References:
[1] Web crawler. (2011, February 22). In Wikipedia, The Free Encyclopedia. Retrieved 16:24, March 4, 2011, from http://en.wikipedia.org/w/index.php?title=Web_crawler&oldid=415343979

Web Crawlers

Looking up information on the Internet has become a daily task for many of us, and thanks to search engines it is not a laborious one. Search engines are convenient because they produce immediate results from countless sources: from web pages to images and videos, we can search through almost everything to be found on the Web. To return these results, search engines rely on a computer program called a web crawler that explores the resources of the Web. Web crawlers look at each page's contents and store information about the page, so that when a user requests something the search engine can find related resources and return them. In this article I shall give a brief introduction to how search engines manage and find what we are interested in.

To begin with, the web crawler is given a list of URLs. The crawler visits a page, identifies its keywords and links, and determines which pieces of information are worth adding or updating. It then downloads a portion of the page and indexes some metadata, for example the page's URL, for future searches. Newly found links are added to the list of URLs so the crawler can continue exploring.
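
Here is a toy version of that loop in TypeScript, just to make the steps concrete. The seed URL, the regex-based link extraction and the in-memory "index" are simplifying assumptions, not how a production crawler works.

// A toy crawl loop: take a URL off the frontier, fetch it, index a little
// metadata, and add any newly discovered links back to the frontier.
const frontier: string[] = ["https://example.com/"];   // seed URL (illustrative)
const visited = new Set<string>();
const index = new Map<string, string>();               // URL -> stored snippet

async function crawl(maxPages: number): Promise<void> {
  while (frontier.length > 0 && visited.size < maxPages) {
    const url = frontier.shift()!;
    if (visited.has(url)) continue;
    visited.add(url);

    const html = await (await fetch(url)).text();

    // Index some metadata about the page (here: just the first 200 characters).
    index.set(url, html.slice(0, 200));

    // Extract absolute links and queue the ones we have not seen yet.
    for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
      if (!visited.has(match[1])) frontier.push(match[1]);
    }
  }
}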

Web crawlers have to select which pages to visit because the number of pages on the Internet is practically unbounded and pages are constantly being added, modified or deleted. Policies are used to decide whether a page is worth visiting, as it is impractical to visit every single page on the Web, possibly multiple times, to check for updates. An example of such a policy is Google's PageRank, which weighs the importance of a page based on the pages that link to it and the PageRank of those pages: the number of pages linking to a given page contributes to its importance and therefore to its PageRank, and the higher the PageRank the more the page is worth indexing. Distributed web crawling is also used to share the work of exploring URLs and downloading pages, so as to speed up the crawl through the Web.
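
To show the idea behind PageRank, here is a tiny iterative computation in TypeScript on a made-up three-page link graph. The graph, damping factor and iteration count are illustrative assumptions; real PageRank runs at a vastly larger scale and needs extra handling, for example for pages with no outgoing links.

// A made-up link graph: A links to B and C, B links to C, C links back to A.
const links: Record<string, string[]> = {
  A: ["B", "C"],
  B: ["C"],
  C: ["A"],
};

function pageRank(
  graph: Record<string, string[]>,
  damping = 0.85,
  iterations = 20
): Record<string, number> {
  const pages = Object.keys(graph);
  const n = pages.length;

  // Start with an even rank for every page.
  let rank: Record<string, number> = {};
  for (const p of pages) rank[p] = 1 / n;

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    for (const p of pages) next[p] = (1 - damping) / n;
    for (const p of pages) {
      // Each page shares its current rank equally among the pages it links to.
      for (const q of graph[p]) next[q] += (damping * rank[p]) / graph[p].length;
    }
    rank = next;
  }
  return rank;
}

console.log(pageRank(links)); // pages with more (and better-ranked) in-links score higher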

References:
[1] Web crawler. (2010, December 22). In Wikipedia, The Free Encyclopedia. Retrieved 11:21, December 29, 2010, from http://en.wikipedia.org/w/index.php?title=Web_crawler&oldid=403711331
[2] PageRank. (2011, January 2). In Wikipedia, The Free Encyclopedia. Retrieved 11:15, January 6, 2011, from http://en.wikipedia.org/w/index.php?title=PageRank&oldid=405547279

A Search Engine for Your Personal Cloud

Web search engines make it possible to access the myriad of information on the Web. Nowadays, cloud technology is changing the way people interact with the Web, for example through social networking and data storage. As our personal cloud grows, keeping track of what happens and where it happens becomes a real issue. Consider all the information that you and your social networks create in one day: e-mails, calendar events and conversations are all examples of your social streams, and remembering everything that happens is impossible. How can we effectively find things in our own cloud? The answer is a search engine for your personal cloud.

Greplin and Introspectr are two such services that allow users to search through their personal data. They index your social-networking services like Facebook and Twitter, mailboxes like Gmail (including attachments and links) and even file-sharing services such as Dropbox and Google Docs. To use them, simply type in your query and they will return all occurrences of your search terms, regardless of which stream they appeared in.
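
The core idea is a single index built over items pulled from many streams. The TypeScript sketch below is a toy inverted index that illustrates this; it is not how Greplin or Introspectr are actually implemented, and the item fields and stream labels are assumptions for the example.

// An item from any stream: mail, tweets, shared files, etc. (illustrative labels).
interface Item {
  id: number;
  stream: string;
  text: string;
}

const wordIndex = new Map<string, Set<number>>();  // word -> ids of items containing it
const items = new Map<number, Item>();

function indexItem(item: Item): void {
  items.set(item.id, item);
  for (const word of item.text.toLowerCase().split(/\W+/).filter(Boolean)) {
    if (!wordIndex.has(word)) wordIndex.set(word, new Set());
    wordIndex.get(word)!.add(item.id);
  }
}

// Return every indexed item containing the query word, regardless of its stream.
function search(query: string): Item[] {
  const ids = wordIndex.get(query.toLowerCase()) ?? new Set<number>();
  return [...ids].map(id => items.get(id)!);
}

indexItem({ id: 1, stream: "mail", text: "Meeting about the quarterly report" });
indexItem({ id: 2, stream: "twitter", text: "Great report on cloud search engines" });
console.log(search("report")); // both items, from two different streams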

The main difference between Greplin and Introspectr is that Greplin indexes your data automatically, refreshing approximately every 20 minutes, whereas with Introspectr you have to update the index manually. There is also a known issue where Greplin does not index the contents of external URLs in tweets, whereas Introspectr does [2]. Both Greplin and Introspectr allow you to index a variety of services; however, you must grant access for each service you want indexed, so safety and privacy naturally become a concern. Greplin states that it uses OAuth to retrieve only your data and does not have access to your log-in information [3].

Greplin and Introspectr offer a convenient, centralized way for users to search through the contents of their cloud. Their services can be accessed on almost any device with an Internet connection, and searching through your social streams becomes just as easy as an e-mail or hard-drive search.

References:
[1] Arrington, M. (Aug 31, 2010). The Other Half Of Search: Greplin Is A Personal Search Engine For Your Online Life. Retrieved on October 26, 2010, from http://techcrunch.com/2010/08/31/greplin-ycombinator-personal-search/
[2] Schonfeld, E. (Oct 12, 2010). Introspectr Searches Your Social Streams. Retrieved on October 26, 2010, from http://techcrunch.com/2010/10/12/introspectr-search-social/
[3] Greplin: https://www.greplin.com/
[4] Introspectr: https://www.introspectr.com/

Smartphone Security - Risks and Preventions

Mobile phones are rapidly evolving and becoming capable of performing tasks that were once predominantly accomplished on computers. These days, smartphones are powerful enough to handle your on-the-go needs and could easily replace your netbook, music player and handheld game system. However, their increased functionality and extensive features are also the reason why smartphones are just as prone to attacks as any personal computer.

The most common risk is losing your phone. What happens to all your personal information when you lose it? Whoever picks up your phone has access to all your contacts, SMS messages and e-mails, and can even tell which banks you use by looking at which banking applications are installed. Never store sensitive information, such as passwords, on your phone, and when you set a password, choose a strong one that cannot easily be guessed.

Be cautious when clicking links while browsing the web, and when opening SMS messages and e-mails (including their attachments) from unknown senders. We know not to trust such links and attachments on our computers, and the same rules apply to smartphones. Untrustworthy links may take you to malicious websites that can retrieve information from your phone, and attachments may contain viruses that can spread to the people in your contact list.

Applications, even though they add to the functionality and usability of smartphones, also present a security threat. Because applications are developed by third parties, containing and controlling deceitful applications remains an issue. Although Google and Apple make an effort to remove malware from their application stores, detection is difficult and action is usually taken only after the harm has been done. Users should therefore pay attention to who the developers are and to which services an application requests access. In one instance, a number of Android users fell victim to an SMS trojan disguised as a media player: unaware that the media player was requesting SMS permissions, they installed the trojan and racked up expensive telephone bills. Jailbreaking a phone, as seen with jailbroken iPhones, can also open security holes for malware to exploit.

In an interesting experiment, a group of researchers from the University of Pennsylvania showed that smartphone passcodes can be identified from the smudges on the touchscreen. In their tests, 92% of the passcodes were partially identifiable from the smudges and as many as 68% were fully identifiable. So you may want to wipe your screen after use.

One thing is for sure: just as you install antivirus and firewall programs when you purchase a new computer or reinstall your operating system, you should do the same for your smartphone. Applications that provide these services, both free and paid, are available in the application stores.

References:
[1] Mills, E. (January 5, 2010). Using your smartphone safely (FAQ). Retrieved on August 30, 2010, from http://news.cnet.com/8301-27080_3-10424759-245.html
[2] Constantin, L. (August 10, 2010). Premium SMS Trojan Targets Android Users. Retrieved on August 30, 2010, from http://news.softpedia.com/news/Premium-SMS-Trojan-Targets-Android-Users-151563.shtml
[3] Bradley, T. (August 11, 2010). Smartphone Security Thwarted by Fingerprint Smudges. Retrieved on August 30, 2010, from http://www.networkworld.com/news/2010/081110-smartphone-security-thwarted-by-fingerprint.html

Cloud-Based Gaming with OnLive

With OnLive's on-demand gaming service, PC gaming has become cheaper and easier. There is no need for a high-end computer with a fancy graphics card and a fast CPU to play the latest video games: a low-end computer running Windows XP or Windows Vista, or an Intel-based Mac running OS X, together with a decent Internet connection, is enough to get started. Users can also access OnLive on their televisions with the OnLive MicroConsole. OnLive enables users to play or rent games, try out game demos and play multi-player games with other users of the service. There are also community features such as spectating live games, recording and sharing gameplay videos, and accessing gamer profiles.

Essentially, the user just needs to know how to use a browser. Game data and interactions are sent from the browser to the OnLive servers for processing; once the data have been computed, a compressed video stream is sent back to the user's browser and the user continues to play. To the user, the gameplay is real-time and feels no different from playing a local copy of the game. It is convenient because OnLive eliminates the need to install and update games, as well as the need for local storage space.

The OnLive service achieves this instant access with a combination of remote servers, working dedicated or shared, that together produce continuous gameplay for the users. Game data are stored and processed on these servers, and the hardware is upgraded every six months to provide users with optimal processing power. Each server has a particular task, such as handling the user interface, running the games or streaming video, and there are several classes of servers depending on the computational requirements and the number of connections. Thus, during a session, a user is passed between several servers depending on their state of play and processing requirements.

With all this data transmission happening in the background, the OnLive service is clearly dependent on, and limited by, the user's broadband connection and region. OnLive claims that high-definition quality is achievable, with video at up to 1280x720 resolution and a frame rate of up to 60 frames per second on a connection of at least 5 Mbps; the slower the connection, the lower the resolution and frame rate. With a 1.5 Mbps connection, standard-definition quality is obtainable, but it may be insufficient for a real-time, action-packed game because the video feedback may not be as smooth as playing on a local machine. Also, because the video is compressed, some of the art detail in the scenes is lost.