Seminar Topics


Seminar – Abstract

Submitted by, Akhilesh. B. Chandran

Cloud computing
Cloud computing refers to logical computational resources (data, software) accessible via a computer network (through a WAN, the Internet, etc.) rather than from a local computer. The online service can be offered by a cloud provider or run on a private organization's own infrastructure. Some analysts regard these technologies as a technological evolution, while others, such as Richard Stallman, see them as a marketing trap. Users or clients can perform tasks such as word processing or mailing with a client such as a browser, with the service provided through such cloud-based computational resources. Since the cloud is the underlying delivery mechanism, cloud-based remote applications and services may support any type of software application or service in use today.

In the past, tasks such as word processing were not possible without installing software on a local computer. With the development of local area networks (LANs) and wider bandwidth, multiple CPUs and storage devices could be used to host services like word processing in a remotely managed datacenter. Cloud computing takes away the installation and upgrade hassles and the need for higher computing power from users, and gives service providers more control over the administration of the services. Consumers now routinely use data-intensive applications driven by cloud technology that were previously unavailable due to cost and deployment complexity. In many companies, employees and departments are bringing a flood of consumer technology into the workplace, which raises legal compliance and security concerns for the corporation.

The term "software as a service" is sometimes used to describe programs offered through "The Cloud", and "The Cloud" itself is a common shorthand for a provided cloud computing service (or even the aggregation of all existing cloud services). An analogy for cloud computing is that of public utilities such as electricity, gas, and water: centralized and standardized utilities freed individuals from the difficulties of generating electricity or pumping water, along with all of the development and maintenance tasks involved in doing so. With cloud computing, this translates to a reduced cost of software distribution for providers still using physical media such as DVDs. The consumer benefit is that software no longer has to be installed and is automatically updated, although savings in terms of money are yet to be seen.

The principle behind the cloud is that any computer connected to the Internet is connected to the same pool of computing power, applications, and files. Users can store and access personal files such as music, pictures, videos, and bookmarks, or play games or do word processing, on a remote server rather than physically carrying around a storage medium such as a DVD or thumb drive. Even those using web-based email such as Gmail, Hotmail, Yahoo!, a company-owned email system, or an email client program such as Outlook, Evolution, Mozilla Thunderbird, or Entourage are making use of cloud email servers. Hence, desktop applications that connect to Internet-hosted email providers can also be considered cloud applications.
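To make the "files and applications live on a remote server" idea concrete, the following is a minimal Python sketch, assuming a purely hypothetical HTTP file-storage endpoint and access token (no particular provider's API): a document is uploaded to the cloud service and can later be retrieved from any connected machine.

# A minimal sketch (not any specific provider's API) of the "files live on a
# remote server" idea: store and fetch a document over HTTP instead of a
# local disk. The endpoint URL and token below are hypothetical placeholders.
import requests

BASE_URL = "https://storage.example-cloud.com/v1/files"   # hypothetical endpoint
TOKEN = "user-access-token"                                # hypothetical credential

def upload_document(name: str, text: str) -> None:
    """Send a document to the remote store; the provider keeps and maintains it."""
    resp = requests.put(
        f"{BASE_URL}/{name}",
        data=text.encode("utf-8"),
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()

def download_document(name: str) -> str:
    """Fetch the same document later, from any machine with a network client."""
    resp = requests.get(f"{BASE_URL}/{name}",
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    upload_document("notes.txt", "Draft written in a cloud word processor.")
    print(download_document("notes.txt"))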

Cloud Computing visual diagram


Microsoft Kinect
Kinect for Xbox 360, or simply Kinect (originally known by the code name Project Natal), is a "controller-free gaming and entertainment experience" by Microsoft for the Xbox 360 video game platform. Based around a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360 without the need to touch a game controller, through a natural user interface using gestures and spoken commands. The project is aimed at broadening the Xbox 360's audience beyond its typical gamer base. Kinect competes with the Wii Remote Plus and the PlayStation Move/PlayStation Eye motion control systems for the Wii and PlayStation 3 home consoles, respectively.

Kinect is based on software technology developed internally by Rare, a subsidiary of Microsoft Game Studios, and on range camera technology by Israeli developer PrimeSense, which interprets 3D scene information from a continuously projected infrared structured light. This 3D scanner system, called Light Coding, employs a variant of image-based 3D reconstruction.

The Kinect sensor is a horizontal bar connected to a small base with a motorized pivot and is designed to be positioned lengthwise above or below the video display. The device features an "RGB camera, depth sensor and multi-array microphone running proprietary software", which provide full-body 3D motion capture, facial recognition and voice recognition capabilities. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions. The sensing range of the depth sensor is adjustable, and the Kinect software is capable of automatically calibrating the sensor based on gameplay and the player's physical environment, accommodating the presence of furniture or other obstacles.

Described by Microsoft as the primary innovation of Kinect, the software technology enables advanced gesture recognition, facial recognition and voice recognition. Kinect is capable of simultaneously tracking up to six people, including two active players for motion analysis with a feature extraction of 20 joints per player.

Kinect holds the Guinness World Record for being the "fastest selling consumer electronics device". It sold an average of 133,333 units per day, for a total of 8 million units in its first 60 days, and 10 million Kinect sensors had been shipped as of March 9, 2011.
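As an illustration of the skeletal tracking described above (up to two active players, 20 joints each), the following Python sketch shows one possible way such a frame could be represented as a data structure. The joint names and types are illustrative assumptions, not the actual Kinect SDK.

# A rough sketch (not the real Kinect SDK) of a tracked skeleton frame:
# up to two active players, each with 20 named joints, and each joint a
# 3D position in the sensor's coordinate space. Names are illustrative.
from dataclasses import dataclass
from typing import Dict, List

JOINT_NAMES = [
    "head", "shoulder_center", "shoulder_left", "shoulder_right",
    "elbow_left", "elbow_right", "wrist_left", "wrist_right",
    "hand_left", "hand_right", "spine", "hip_center",
    "hip_left", "hip_right", "knee_left", "knee_right",
    "ankle_left", "ankle_right", "foot_left", "foot_right",
]  # 20 joints per player, as described above

@dataclass
class Joint:
    x: float  # metres, left/right relative to the sensor
    y: float  # metres, up/down
    z: float  # metres, distance from the depth sensor

@dataclass
class Skeleton:
    player_id: int
    joints: Dict[str, Joint]  # keyed by joint name

def active_players(frame: List[Skeleton]) -> List[int]:
    """Return the ids of players with a complete 20-joint skeleton in this frame."""
    return [s.player_id for s in frame if len(s.joints) == len(JOINT_NAMES)]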


Web service
A Web service is a method of communication between two electronic devices over a network.

The W3C defines a "Web service" as "a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL, the Web Services Description Language). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards."

The W3C also states, "We can identify two major classes of Web services, REST-compliant Web services, in which the primary purpose of the service is to manipulate XML representations of Web resources using a uniform set of "stateless" operations; and arbitrary Web services, in which the service may expose an arbitrary set of operations."
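The REST-compliant class of Web service can be illustrated with a short Python sketch: the client manipulates an XML representation of a resource through the uniform, stateless HTTP verbs GET and PUT. The resource URL and the XML fields below are hypothetical.

# A small sketch of a REST-compliant Web service interaction: the client
# retrieves and replaces the XML representation of a resource using the
# uniform, stateless HTTP verbs. URL and XML structure are hypothetical.
import requests
import xml.etree.ElementTree as ET

RESOURCE_URL = "https://api.example.org/books/42"  # hypothetical resource

# GET retrieves the current XML representation of the resource.
response = requests.get(RESOURCE_URL, headers={"Accept": "application/xml"})
response.raise_for_status()
book = ET.fromstring(response.text)          # e.g. <book><title>...</title></book>
print(book.findtext("title"))

# PUT replaces that representation; the request itself carries all needed state.
updated = "<book><title>Web Services Architecture</title></book>"
requests.put(RESOURCE_URL, data=updated,
             headers={"Content-Type": "application/xml"}).raise_for_status()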

Web services architecture


Text mining
Text mining, sometimes alternately referred to as text data mining and roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends by means such as statistical pattern learning. Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interestingness. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).
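A toy Python example of this pipeline, with invented categories and keyword lists: the raw text is structured by tokenization, term frequencies are derived from the structured form, and that derived pattern drives a crude text-categorization step.

# Toy text-mining pipeline: structure the text (tokenise), derive a pattern
# (term frequencies), and use it for simple categorisation. The categories
# and keyword lists are invented for this example.
import re
from collections import Counter

CATEGORY_KEYWORDS = {
    "finance": {"payment", "bank", "currency", "card"},
    "computing": {"server", "software", "network", "storage"},
}

def tokenize(text):
    """Structuring step: lower-case the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def categorize(text):
    """Assign the category whose keywords overlap most with the document's terms."""
    counts = Counter(tokenize(text))          # derived structure: term frequencies
    scores = {
        category: sum(counts[w] for w in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    return max(scores, key=scores.get)

print(categorize("The bank issued a contactless payment card."))    # finance
print(categorize("The server stores software on network storage.")) # computing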

Hierarchical clustering diagram


Electronic money
Electronic money (also known as e-currency, e-money, electronic cash, electronic currency, digital money, digital cash, digital currency, or cyber currency) refers to money or scrip which is exchanged only electronically. Typically, this involves the use of computer networks, the Internet, and digital stored-value systems. Electronic funds transfer (EFT), direct deposit, digital gold currency and virtual currency are all examples of electronic money. It is also a collective term for financial cryptography and the technologies enabling it.

While electronic money has been an interesting problem for cryptography (see for example the work of David Chaum and Markus Jakobsson), to date the use of e-money has been relatively low-scale. One rare success has been Hong Kong's Octopus card system, which started as a transit payment system and has grown into a widely used electronic money system. London Transport's Oyster card system, by contrast, remains essentially a contactless pre-paid travelcard. A few other places have implemented functioning electronic money systems. Very similar to Hong Kong's Octopus card, Singapore has an electronic money program for its public transportation system (commuter trains, buses, etc.), based on the same type of (FeliCa) system. The Netherlands has implemented a nationwide electronic money system known as Chipknip for general purposes, as well as the OV-Chipkaart for transit fare collection. In Belgium, Proton, a payment service owned by 60 Belgian banks that issues stored-value cards, was developed in 1995.

A number of electronic money systems use contactless payment transfer in order to facilitate easy payment and give the payee more confidence in not letting go of their electronic wallet during the transaction.


High-definition video
High-definition video or HD video refers to any video system of higher resolution than standard-definition (SD) video, and most commonly involves display resolutions of 1,280×720 pixels (720p) or 1,920×1,080 pixels (1080i/1080p). This abstract discusses the general concepts of high-definition video, as opposed to its specific applications in television broadcast (HDTV), video recording formats (HDCAM, HDCAM-SR, DVCPRO HD, D5 HD, AVC-Intra, XDCAM HD, HDV and AVCHD), the optical disc delivery system Blu-ray Disc and the video tape format D-VHS. High-definition video (prerecorded and broadcast) is defined by three factors:

1. The number of lines in the vertical display resolution. High-definition television (HDTV) resolution is 1,080 or 720 lines. In contrast, regular digital television (DTV) is 480 lines (upon which NTSC is based, with 480 visible scanlines out of 525) or 576 lines (upon which PAL/SECAM are based, with 576 visible scanlines out of 625). However, since HD is broadcast digitally, its introduction sometimes coincides with the introduction of DTV. Additionally, current DVD quality is not high-definition, although the high-definition disc systems Blu-ray Disc and the defunct HD DVD are.

2. The scanning system: progressive scanning (p) or interlaced scanning (i). Progressive scanning redraws an entire image frame (all of its lines) on each refresh. Interlaced scanning draws every other line ("odd-numbered" lines) during the first refresh operation, and then draws the remaining "even-numbered" lines during a second refresh. Interlaced scanning yields greater image resolution if the subject is not moving, but loses up to half of the resolution and suffers "combing" artifacts when the subject is moving.

3. The number of frames or fields per second. The 720p60 format is 1,280×720 pixels, progressively encoded at 60 frames per second (60 Hz). The 1080i50 format is 1,920×1,080 pixels, interlace encoded at 50 fields per second; two interlaced fields make up a single frame, because the two fields of one frame are temporally shifted. Frame pulldown and segmented frames are special techniques that allow transmitting full frames by means of an interlaced video stream.
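These three parameters together determine the raw (uncompressed) data volume of a video signal. The following back-of-the-envelope Python calculation illustrates this, assuming 24 bits per pixel (8-bit RGB) purely for illustration.

# Raw, uncompressed data rate implied by resolution, scanning and rate.
# Assumes 24 bits per pixel (8-bit RGB), for illustration only.
BITS_PER_PIXEL = 24

def raw_rate_mbps(width, height, rate, interlaced):
    """Uncompressed video data rate in megabits per second."""
    lines_per_pass = height // 2 if interlaced else height   # a field is half the lines
    return width * lines_per_pass * rate * BITS_PER_PIXEL / 1_000_000

print(f"720p60:  {raw_rate_mbps(1280, 720, 60, interlaced=False):.0f} Mbit/s")
print(f"1080i50: {raw_rate_mbps(1920, 1080, 50, interlaced=True):.0f} Mbit/s")
print(f"1080p60: {raw_rate_mbps(1920, 1080, 60, interlaced=False):.0f} Mbit/s")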


Solid-state drive
A solid-state drive (SSD) is a data storage device that uses solid-state memory to store persistent data, with the intention of providing access in the same manner as a traditional block I/O hard disk drive. SSDs are distinguished from traditional hard disk drives (HDDs), which are electromechanical devices containing spinning disks and movable read/write heads. SSDs, in contrast, retain data in non-volatile memory chips and contain no moving parts. Compared to electromechanical HDDs, SSDs are typically less susceptible to physical shock, are silent, and have lower access time and latency, but are more expensive per gigabyte (GB) and typically support a limited number of writes over the life of the device. SSDs use the same interface as hard disk drives, and can thus easily replace them in most applications.
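The lower access time of SSDs can, in principle, be observed from user space by timing many small reads at random offsets within a large file on the drive under test. The Python sketch below is illustrative only: the file path is a placeholder, and a realistic measurement would need direct I/O or cold caches to avoid the operating system's page cache.

# Rough sketch of measuring random-access read latency on a drive.
# Illustrative only: OS caching will blur results; path is a placeholder.
import os
import random
import time

PATH = "/path/to/large_test_file"      # placeholder: a large file on the drive under test
BLOCK = 4096                           # read size in bytes
SAMPLES = 1000

def random_read_latency(path):
    """Average seconds per 4 KiB read at random offsets (cache effects apply)."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(SAMPLES):
            os.lseek(fd, random.randrange(0, size - BLOCK), os.SEEK_SET)
            os.read(fd, BLOCK)
        return (time.perf_counter() - start) / SAMPLES
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(f"avg random 4 KiB read: {random_read_latency(PATH) * 1e6:.1f} microseconds")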

As of 2010, most SSDs use NAND-based flash memory, which retains data even without power. SSDs based on volatile random-access memory (RAM) also exist, for situations that require even faster access but do not necessarily need data persistence after power loss, or that use external power or batteries to maintain the data after power is removed.

A hybrid drive combines the features of an HDD and an SSD in one unit, pairing a large HDD with a smaller SSD cache to improve the performance of frequently accessed files. Such drives can offer near-SSD performance in most applications (such as system startup and loading applications) at a lower price than an SSD, but they are not suitable for data-intensive work, nor do they offer the other advantages of SSDs.


Web operating system
In metacomputing, WebOS and Web operating system are terms that describe network services for Internet scale distributed computing, as in the WebOS Project at UC Berkeley, and the WOS Project. In both cases, the scale of the web operating system extends across the Internet, like the web.

However, the terms WebOS and Web operating system have been employed more broadly and with far greater popularity in the context of "the web as in HTTP", and for many meanings ranging from singular systems to collections of systems. In April 2002, Tim O'Reilly spoke of "the emergent Internet operating system" as an open collection of Web services.

In uses referring to collections of systems, a Web operating system is distinct from an Internet operating system in that it is independent of the traditional operating system of an individual computer. This conception of the system reflects an evolution of operating systems research toward the increasingly minimized (for example, TinyOS and Exokernel) and the distributed (for example, Inferno), with distributed systems increasingly defined in terms of the specification of their network protocols rather than their implementations (for example, Plan 9's 9P).

In a usage referring to singular network services, a Web operating system is another name for a Webtop. These services turn the desktop into a service that runs on the Internet rather than on the local computer. As these services include a file system and application management system, they increasingly overlap with the functionality of a traditional desktop computer operating system.

In a usage referring to desktop (or handheld) computer application environments, a Web operating system is a traditional operating system focused on supporting Web applications themselves, or a desktop operating system that solely provides Web access. Systems like these are also known as kiosks.
