Showing posts with label Business.

Thursday, February 14, 2013

Another Blow For BlackBerry As New Zealand Cops Pick iOS Devices

blackberry logo

In another setback for BlackBerry’s key government business, the New Zealand police force has chosen iOS devices over smartphones and tablets running competing operating systems. Kiwi cops will be kitted out with iOS devices after spending nearly a year testing iPhones and iPads against models running BlackBerry and Android, reports the National Business Review.


New Zealand Prime Minister John Key and Police Minister Anne Tolley announced that 6,000 frontline officers will receive an iPhone, while 3,900 will also get an iPad, in the initial rollout. The decision came after 100 staff members spent 11 months testing devices.


New Zealand police chief information officer Stephen Crombie said that Apple’s products were chosen because it is easier to upgrade to newer iOS phones and tablets:


“Based on frontline officer feedback from the trial (over 100 staff in four districts trialled smartphones, laptops and tablets over an 11-month period) the preferred devices are the iPhone as smartphone and iPad for the tablet. The approach used to develop the applications means Police can move to other devices with relative ease as technology changes.”


The initial rollout over the next three months will cost $4.3 million NZ, or about $3.75 million USD. In the next 10 years, the program will cost $159 million NZ ($134.7 million USD), but the police claim that the investment will reap productivity benefits of $305 million NZ ($258.5 million USD) over the decade.


The move comes as a chunk of the New Zealand police force switch carriers from Telecom to Vodafone. Vodafone won a 10-year outsourced deal, which represents new business for the company. Crombie told the National Business Review that Telecom’s Gen-i division, which had previously been the force’s sole carrier, will continue to supply mobile services for operational management and administrative staff. Over the next year, however, Crombie said that police will be “working to determine how many of these mobiles will move to the arrangement with Vodafone.”


The New Zealand police force's decision is yet another setback for BlackBerry in Oceania. Earlier this month, Australia's Treasury Department said it would replace 250 BlackBerry devices with the iPhone 5 after the Defence Signals Directorate certified iOS for government use. The rollout is expected to be completed by the end of March. The Treasury Department's chief information officer said the decision was made in spite of BB10's launch because "BlackBerry has pretty limited capability. With the new one being launched, it's almost too late. Maybe it'll catch up, maybe it won't."


More government agencies are switching away from BlackBerry devices, something that should worry the company formerly known as Research In Motion if it wants to hold onto its core government business. Last October, U.S. Immigration and Customs Enforcement chose the iPhone as its new mobile platform, with 17,676 ICE employees receiving iPhones instead of BlackBerrys. The agency followed the Federal Air Marshal Service, the Coast Guard, the Bureau of Alcohol, Tobacco, Firearms and Explosives, the Transportation Security Administration, the Air Force, and the Federal Aviation Administration as U.S. federal agencies that had either switched away from BlackBerry or started offering their employees alternative devices.


A recent Gartner report showed that in Q4 2012, BlackBerry held just 3.5% of the global market share for smartphones, down from 8.8% in the same period a year earlier.


BlackBerry seems well aware of the problem: earlier this month its vice president of government solutions, Paul Lucier, told Government Technology that BlackBerry's stringent security standards inadvertently drove customers away.


“They locked [BlackBerry devices] down so much that people were really only using them for email, very basic features. As the BYOD trend started to take off across enterprise, government included, it posed a big challenge. People were comparing a brand-new device on the market that had all the bells and whistles with a locked-down BlackBerry,” Lucier said.





Monday, February 4, 2013

Electronic business

electronic business

Electronic business, commonly referred to as "eBusiness" or "e-business", or an internet business, may be defined as the application of information and communication technologies (ICT) in support of all the activities of business. Commerce constitutes the exchange of products and services between businesses, groups and individuals and can be seen as one of the essential activities of any business. Electronic commerce focuses on the use of ICT to enable the external activities and relationships of the business with individuals, groups and other businesses.[1]

E-business may be defined as the conduct of industry, trade, and commerce using computer networks. The term "e-business" was coined by IBM's marketing and Internet teams in 1996.

Electronic business methods enable companies to link their internal and external data processing systems more efficiently and flexibly, to work more closely with suppliers and partners, and to better satisfy the needs and expectations of their customers. The Internet is a public thoroughfare; firms therefore use more private and hence more secure networks for more effective and efficient management of their internal functions.

In practice, e-business is more than just e-commerce. While e-business refers to a more strategic focus with an emphasis on the functions that occur using electronic capabilities, e-commerce is a subset of an overall e-business strategy. E-commerce seeks to add revenue streams by using the World Wide Web or the Internet to build and enhance relationships with clients and partners and to improve efficiency. Often, e-commerce involves the application of knowledge management systems.

E-business involves business processes spanning the entire value chain: electronic purchasing and supply chain management, processing orders electronically, handling customer service, and cooperating with business partners. Special technical standards for e-business facilitate the exchange of data between companies. E-business software solutions allow the integration of intra- and inter-firm business processes. E-business can be conducted using the Web, the Internet, intranets, extranets, or some combination of these.

Basically, electronic commerce (EC) is the process of buying, transferring, or exchanging products, services, and/or information via computer networks, including the Internet. EC can also be beneficial from many perspectives, including business process, service, learning, collaboration, and community. EC is often confused with e-business.

Sunday, January 20, 2013

NCH Corporation

NCH Corporation

NCH Corporation is a major international marketer of maintenance products, and one of the largest companies in the world to sell such products through direct marketing. NCH's products include specialty chemicals, fasteners, welding supplies, pet products, and plumbing parts. These products are sold through a number of wholly owned subsidiaries, many of which are engaged in the maintenance products business. Subsidiary companies in NCH's Chemical Specialties division produce a diverse array of maintenance chemicals that includes cleaners, degreasers, lubricants, grounds care, housekeeping, and water treatment products. Companies in the Partsmaster group offer a wide variety of items for maintenance and repair, including welding supplies and fasteners. The Plumbing Products Group provides plumbing supplies for the do-it-yourself retail consumer and the OEM market. The Retail Products Group markets a wide range of pet supplies. Other subsidiary groups under the NCH umbrella include X-Chem, an oil field services division, and Pure Solve, a parts-washing service business. NCH has over 8,500 employees. Its branch offices and manufacturing plants are located on six continents, and its products are sold in over 50 different countries.

History


National Disinfectant Company, the original incarnation of NCH Corporation, was founded in Dallas, Texas, by Milton P. Levy in 1919. Leadership of the company has remained in the hands of the Levy family to this day. National Disinfectant's original line of products was fairly small; it included a coal tar disinfectant, an insecticide, and a liquid hand soap for institutional use. The company was a small, efficient operation, and orders received in the morning would be delivered in the afternoon of the same day. During the next couple of decades the company's offerings grew. One brand that appeared in the late 1930s was Everbrite, a heavy-duty industrial floor wax. Everbrite has continued to exist in varying forms since then, eventually evolving into a strong multi-purpose cleaner that kills bacteria.

Levy's three sons, Lester A., Milton P., Jr., and Irvin L., were involved in the company's operations from early on, working in the warehouse and shipping areas as teenagers and learning the business from the ground up. When the senior Levy died in 1946, the family was prepared to continue running National Disinfectant. Levy's widow, Ruth, took over as president of the company. Lester Levy was placed in charge of the company's small but growing sales crew. Milton, Jr., began to integrate the development of a sales territory in Austin, Texas, with the completion of his studies there at the University of Texas. Irvin, after working part-time as office manager while he finished school at Southern Methodist University, began developing another sales area in the Dallas-Fort Worth region. In 1947 company sales were $300,000. The Levys were assisted in running the company by Jack Mann, National Disinfectant's top sales representative since joining the company in the 1920s. Mann, a former vaudeville entertainer and a close friend of Milton, Sr., would stay with the company for 40 years. The company's Mantek chemical division was named after him shortly after his death in 1968.

In the 1950s National Disinfectant began to integrate vertically and to expand its marketing area. The company began to reinvest a sizeable portion of its profits in manufacturing and research facilities in order to decrease its reliance on outside producers for its wares. One important acquisition that was made in the early 1950s was Certified Laboratories. Certified continued to operate as an independent company with its own brand name and its own sales force, but this wholly owned subsidiary was generating over one-fourth of the company's revenue within a few years. By the middle of the decade, National Disinfectant was shipping its products via rail to several points outside of Texas, with new concentrations of customers in Oklahoma, Louisiana, Arizona, and New Mexico. St. Louis was the site of the company's first branch office, established in 1956.

As demand for National Disinfectant's products grew, so did its sales force. A sales management team was created during this period, and the training of new sales representatives became more standardized. National Disinfectant manufacturing plants began to spring up across the United States, first in Texas, and later regional plants appeared in New Jersey, California, Puerto Rico, and Indiana.

In 1960 the company's name was changed to National Chemsearch Corp. in order to better reflect the expansion of its product line beyond disinfectants. National Chemsearch began to go international during the 1960s. Its first overseas sales endeavors were in the Caribbean. Sales efforts soon spread to Canada and to Central and South America. Eventually, the company landed in Europe as well. In 1962 the company's administrative offices, along with laboratories and manufacturing operations, were moved to a new headquarters located in Irving, Texas, a suburb of Dallas. National Chemsearch acquired two more subsidiaries in the first half of the 1960s. Hallmark Chemical Corp., which sold a line of building products, was acquired in 1962. Two years later, the company purchased Lamkin Brothers, Inc., a marketer of vitamin and mineral supplements for livestock. National Chemsearch offered its stock to the public for the first time in 1965. The Levy family retained control of 70 percent of the stock. By that time, Ruth Levy had retired, and a clear division of labor existed among the three brothers. Lester, chairman of the board, oversaw corporate planning and much of the company's financial dealings. Milton handled production, distribution, and product development as chairman of the executive committee. And company president Irvin was in charge of expanding the company's domestic and foreign sales efforts.

Between 1962 and 1966 National Chemsearch's sales grew at an average rate of 29 percent a year. By the end of that stretch, the company was earning $2.4 million on sales of $25 million. Much of the company's success was attributed to its direct sales methods, which eliminated the need for wholesalers or other intermediaries. By offering a broad range of products to a large number of customers (many of them relatively small shops and plants), National Chemsearch was able to compete favorably with larger companies that were concentrating on selling only very large orders.

By 1967, Chemsearch employed more than 600 sales representatives. None of the company's 40,000 customers accounted for even one percent of its sales. About 60 percent of these customers were industrial or commercial clients; the rest were institutions such as hospitals and schools. The greatest share of sales (over 60 percent) was still coming from cleaning chemicals at that time. Constant research was adding about 20 products a year to the line. Toward the end of the 1960s, the Plumbmaster and Partsmaster divisions were created. The establishment of these divisions meant that the growing number of newly acquired subsidiaries could be grouped according to the nature of their products.

In February 1969 National Chemsearch stock was listed on the New York Stock Exchange for the first time. In 1970 the company's product line included roughly 250 items, sold under the trade names "National Chemsearch," "Certified," "Mantek," and "Dyna Systems" (fasteners). Turf maintenance supplies, paints and sealers, and sewage treatment chemicals were among the items offered, in addition to the growing list of cleaning chemicals. Sales and profits continued to grow slowly but surely into the early 1970s. By 1971, sales had reached $69 million, with net income of $6.6 million. About 20 percent of the company's revenue was being generated through foreign sales by this time. Among National Chemsearch's acquisitions during this period were P & M Manufacturing Company of Los Angeles in 1970 and the Pennsylvania-based Daniel P. Creed Co., Inc., in 1972. P & M, with annual sales of about $1.5 million in the plumbing maintenance industry, was acquired for 8,686 shares of common stock. Daniel P. Creed, also in the plumbing supply business, was a cash purchase.

By 1973, sales at National Chemsearch had soared to $103 million. About 3,000 sales representatives were hawking the company's products by the middle of the 1970s. In 1977, specialty chemicals accounted for about 90 percent of sales. The remaining 10 percent was derived from the younger segments of the company, including fasteners, plumbing parts, and welding supplies. National Chemsearch's goal of reducing reliance on outside manufacturers had more or less been achieved by this time, as nearly all of the company's specialty chemicals were being fabricated at its own facilities, the exception being its turf maintenance products.

Annual sales doubled again by 1978, breaking $200 million for the first time. The company's name was changed to NCH Corporation that year. As was the case with the previous name change, the intent was to reflect the increasing diversity of the company's wares. NCH's acquisitions around this time included the 1978 purchase of Specialty Products Co., a manufacturer of specialty plumbing items. Specialty Products, based in Stanton, California, had yearly sales of about $4 million. The following year, NCH acquired the domestic assets of American Allsafe Co. This acquisition paved the way for the development of the company's safety equipment division, whose mission was to supply items such as eye and head protection gear to the increasingly safety-conscious industrial world. 1979 also marked the launch of Kernite SA, a new trading company set up by NCH in Belgium dealing in chemicals, petrochemicals, and lubricants.

NCH's previously steady growth in sales stalled somewhat in the first half of the 1980s. After reaching a high of $356 million in 1981, sales actually declined in each of the next three years, and did not surpass the 1981 figure until 1986, when $375 million in sales was reported. One obvious reason for this stagnation was a generally sluggish global economy, in which maintenance supplies were easy targets for the cost-cutting efforts of struggling industrial firms. Also, the first-year turnover rate among NCH sales representatives was much higher than usual due to slow sales accompanied by higher gas and car maintenance costs, which are borne by the sales personnel. The size of the sales force was stuck at about 4,000 throughout the first half of the decade.

In 1986 NCH added direct mail, telemarketing, and catalog sales to its arsenal of marketing techniques. Cornerstone Direct was formed for this purpose, offering material handling equipment, first-aid kits, and other industrial supplies. Sales growth returned in the second half of the 1980s, breaking $400 million in 1987 and $500 million in 1988. European operations contributed more and more to the company's sales and income during this period. With sales up and expenses down, NCH's earned income from Europe quadrupled between 1987 and 1989, from $4.8 million to $18.8 million. Another area that expanded significantly in the last few years of the decade was the company's Resource Electronics Division, with the acquisition of three electronic parts distributors between 1988 and 1990.

Sales and income reached new peaks of $677 million and $43 million in 1991, before dropping slightly in 1992. One major cost incurred by the company in 1992 was the restructuring of its Brazilian subsidiary, a downsizing made necessary by the phenomenal rate of inflation and general instability of the Brazilian economy. A new plant was built in Korea in 1992, making it possible to offer a broader range of products in the growing Asian market. Among NCH's acquisitions that year was a line of stainless steel flexible tubing connectors. These new products were marketed under the trade name Aqua-Flo. By the end of fiscal 1992, NCH's plumbing group was offering a total of more than 80,000 different parts. The Resource Electronics group's line had grown to over 40,000 parts by this time as well. The company also expanded its line of retail products, which by this time included Outright brand pet care products, Out! International pet odor eliminators, and Totally Toddler nursery care items. A variety of plumbing and hardware supplies for do-it-yourselfers also became available in retail outlets.

In 2002 the Levy Family committed to ensuring the long term stability of NCH by purchasing 100% of the public shares. This ended the company's 37 year history as a publicly traded company.

NCH Corporation's major strengths are the diversity and quality of its products, along with the well-planned organization of its huge army of direct sales representatives. The company has a history of choosing its acquisitions carefully, and of investing wisely in its manufacturing and research facilities, a crucial commitment given the competition NCH faces in the industrial supply business from larger corporations. Since NCH managed to thrive during several of the toughest years for industry in recent history, the company's continuing growth in the global market seems likely.

Source

Creative Commons Attribution-ShareAlike License

Mashable

mashable


Mashable (Mashable Inc.) is a Scottish-American news website and Internet news blog founded by Pete Cashmore. The website's primary focus is social media news, but it also covers news and developments in mobile, entertainment, online video, business, web development, technology, memes and gadgets. Mashable was launched by Pete Cashmore from his home in Aberdeen, Scotland, in July 2005.

With a reported 50+ million monthly pageviews and an Alexa ranking under 300, Mashable ranks as one of the world's largest websites. Time noted Mashable as one of the 25 best blogs of 2009, and it has been described as a "one-stop shop" for social media.[10] As of March 2012, it has over 2,775,000 Twitter followers and over 838,400 fans on Facebook.

Mashable Connect conference


Mashable Connect is an annual invite-only conference. It was held on 12–14 May 2011, with 300 attendees. Speakers included Scott Belsky, founder and CEO of Behance; Rohit Bhargava, SVP of Global Strategy & Marketing at Ogilvy; Sabrina Caluori, Director of Social Media & Marketing at HBO; and Greg Clayman, Publisher of The Daily.


Themes discussed included content curation, the democratisation of content, social media, social television, and helping consumers deal with content overload.

Acquisition


Rumors about an acquisition of Mashable have circulated on the web for some time, with AOL the name most frequently mentioned. More recently,[when?] various sites including Reuters have reported that the site would soon be acquired for $200 million by CNN, which also provided a video.



 

Source

Creative Commons Attribution-ShareAlike License

Mashable



 

Saturday, November 19, 2011

Google News: Europe fears credit squeeze as investors sell bond holdings

Google News
Economic Times - 1 hour ago
Nervous investors around the globe are accelerating their exit from the debt of European governments and banks, increasing the risk of a credit squeeze that could set off a downward spiral.




Monday, October 3, 2011

Data Centers

Image via Wikipedia: power triangle, the components of AC power.
A data center (or data centre or datacentre or datacenter) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.
History

Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, elevated floors, and cable trays (installed overhead or under the elevated floor). Also, a single mainframe required a great deal of power, and had to be cooled to avoid overheating. Security was important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.
During the boom of the microcomputer industry, and especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources. With the advent of client-server computing, during the 1990s, microcomputers (now called "servers") started to find their places in the old computer rooms. The availability of inexpensive networking equipment, coupled with new standards for network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time.
The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward the private data centers, and were adopted largely because of their practical results.
As of 2007, data center design, construction, and operation is a well-known discipline. Standard Documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption. There is still a lot of development being done in operation practice, and also in environmentally-friendly data center design. Data centers are typically very expensive to build and maintain. For instance, Amazon.com's new 116,000 sq ft (10,800 m2) data center in Oregon is expected to cost up to $100 million.[1]
Requirements for modern data centers



Racks of telecommunications equipment in part of a data center.
IT operations are a crucial aspect of most organizational operations. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.
Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
Operate and manage a carrier’s telecommunication network
Provide data center based applications directly to the carrier’s customers
Provide hosted applications for a third party to provide services to their customers
Provide a combination of these and similar data center applications.
Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation. Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers.
Standardization means integrated building and equipment engineering. Modularity has the benefits of scalability and easier growth, even when planning forecasts are less than optimal. For these reasons, telecommunications data centers should be planned in repetitive building blocks of equipment, and associated power and support (conditioning) equipment when practical. The use of dedicated centralized systems requires more accurate forecasts of future needs to prevent expensive over construction, or perhaps worse — under construction that fails to meet future needs.
The "lights-out" data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.[2][3]
There is a trend to modernize data centers in order to take advantage of the performance and energy efficiency increases of newer IT equipment and capabilities, such as cloud computing. This process is also known as data center transformation.[4]
Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old.[5] Gartner, another research company, says data centers older than seven years are obsolete.[6]
In May 2011, data center research organization Uptime Institute, reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within the next 18 months.[7]
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.[8] The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.
Standardization/consolidation: The purpose of this project is to reduce the number of data centers a large organization may have. This project also helps to reduce the number of hardware, software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer ones that provide increased capacity and performance. Computing, networking and management platforms are standardized so they are easier to manage.[9]
Virtualize: There is a trend to use IT virtualization technologies to replace or consolidate multiple pieces of data center equipment, such as servers. Virtualization helps to lower capital and operational expenses[10] and reduce energy consumption.[11] Data released by investment bank Lazard Capital Markets reports that 48 percent of enterprise operations will be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.[12]
Automating: Data center automation involves automating tasks such as provisioning, configuration, patching, release management and compliance. As enterprises suffer from a shortage of skilled IT workers,[13] automating tasks makes data centers run more efficiently.
Securing: In modern data centers, the security of data on virtual systems is integrated with existing security of physical infrastructures.[14] The security of a modern data center must take into account physical security, network security, and data and user security.
Data center classification

The Telecommunications Industry Association is a trade association (about 600 members) accredited by ANSI (American National Standards Institute). In 2005 it published ANSI/TIA-942, Telecommunications Infrastructure Standard for Data Centers, which defined four levels (called tiers) of data centers in a thorough, quantifiable manner. TIA-942 was amended in 2008 and again in 2010. TIA-942: Data Center Standards Overview describes the requirements for the data center infrastructure. The simplest is a Tier 1 data center, which is basically a server room, following basic guidelines for the installation of computer systems. The most stringent level is a Tier 4 data center, which is designed to host mission critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by biometric access control methods. Another consideration is the placement of the data center in a subterranean context, for data security as well as environmental considerations such as cooling requirements.[15]
The German Datacenter star audit programme uses an auditing process to certify 5 levels of "gratification" that affect Data Center criticality.
Independent from the ANSI/TIA-942 standard, the Uptime Institute, a think tank and professional-services organization based in Santa Fe, New Mexico, has defined its own four levels on which it holds a copyright. The levels describe the availability of data from the hardware at a location. The higher the tier, the greater the availability. The levels are: [16] [17] [18]
The tier levels and their requirements are:
Tier 1: Single non-redundant distribution path serving the IT equipment; non-redundant capacity components; basic site infrastructure guaranteeing 99.671% availability.
Tier 2: Fulfills all Tier 1 requirements; redundant site infrastructure capacity components guaranteeing 99.741% availability.
Tier 3: Fulfills all Tier 1 and Tier 2 requirements; multiple independent distribution paths serving the IT equipment; all IT equipment dual-powered and fully compatible with the topology of the site's architecture; concurrently maintainable site infrastructure guaranteeing 99.982% availability.
Tier 4: Fulfills all Tier 1, Tier 2 and Tier 3 requirements; all cooling equipment independently dual-powered, including chillers and heating, ventilating and air-conditioning (HVAC) systems; fault-tolerant site infrastructure with electrical power storage and distribution facilities guaranteeing 99.995% availability.
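As a rough illustration (this arithmetic is not part of TIA-942 or the Uptime Institute documents), the availability percentages above translate into approximate downtime per year; the helper below is a hypothetical Python sketch.

# Illustrative only: convert the tier availability percentages quoted above
# into approximate downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

tier_availability = {"Tier 1": 99.671, "Tier 2": 99.741,
                     "Tier 3": 99.982, "Tier 4": 99.995}

for tier, pct in tier_availability.items():
    downtime_hours = (1 - pct / 100) * MINUTES_PER_YEAR / 60
    print(f"{tier}: {pct}% availability allows roughly {downtime_hours:.1f} hours of downtime per year")

On this arithmetic, Tier 1 permits roughly 29 hours of downtime per year, while Tier 4 permits under half an hour.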
Design considerations



A typical server rack, commonly seen in colocation.
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size from 1U servers to large freestanding storage silos which occupy many tiles on the floor. Some equipment such as mainframe computers and storage devices are often as big as the racks themselves, and are placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each;[19] when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers).[20]
Local building codes may govern the minimum ceiling heights.


A bank of batteries in a large data center, used to provide power until diesel generators can start.
Environmental control
Main article: Data center environmental control
The physical environment of a data center is rigorously controlled. Air conditioning is used to control the temperature and humidity in the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments"[21] recommends a temperature range of 16–24 °C (61–75 °F) and a humidity range of 40–55% with a maximum dew point of 15 °C as optimal for data center conditions.[22] The temperature in a data center will naturally rise because the electrical power used heats the air. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return space air below the dew point. If humidity is too high, water may begin to condense on internal components; if the atmosphere is too dry, ancillary humidification systems may add water vapor, because excessively low humidity can cause static electricity discharges that damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs.
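As an illustrative cross-check of those ASHRAE figures, the sketch below estimates the dew point from temperature and relative humidity using the Magnus approximation; the coefficients are commonly quoted values, and this is not an ASHRAE calculation method.

import math

# Sketch: approximate dew point (in degrees C) from air temperature and
# relative humidity using the Magnus formula; coefficients are commonly
# quoted values, not an ASHRAE method.
def dew_point_c(temp_c, rel_humidity_pct):
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

# At the warm, humid end of the recommended range (24 C, 55% RH) the dew point
# comes out near 14.4 C, just under the 15 C maximum quoted above.
print(dew_point_c(24.0, 55.0))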
Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool.[23] Many data centers now cool all of the servers using outside air. They do not use chillers/air conditioners, which creates potential energy savings in the millions of dollars.[24]
Telcordia GR-2930, NEBS: Raised Floor Generic Requirements for Network and Data Centers, presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.
There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringerless, stringered, and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.
Stringerless Raised Floors - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals. This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads and are not recommended.
Stringered Raised Floors - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.
Structural Platforms - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.
Metal Whiskers
Raised floors and other metal structures such as cable trays and ventilation ducts have caused many problems with zinc whiskers in the past, and they are likely still present in many data centers. Zinc whiskers occur when microscopic metallic filaments form on metals such as zinc or tin that protect many metal structures and electronic components from corrosion. Maintenance on a raised floor or installation of cable can dislodge the whiskers, which enter the airflow and may short circuit server components or power supplies, sometimes through a high current metal vapor plasma arc. This phenomenon is not unique to data centers, and has also caused catastrophic failures of satellites and military hardware.[25]
Electrical power
Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel generators.[26]
To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
Data centers typically have raised flooring made up of 60 cm (2 ft) removable square tiles. The trend is towards an 80–100 cm (31–39 in) void to allow for better and more uniform air distribution. These provide a plenum for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling.
Some new data centres and technology demonstrations are beginning to standardise a 380VDC power distribution network that is expected to improve efficiency of building power systems.[27][28] Since much of the power loss in electrical systems is caused by voltage and AC/DC conversion, an all DC network supplying low voltage (LV) power close to equipment loads is expected to achieve significant savings in power usage and cooling loads.[29] 380VDC power supply requires DC connectors that prevent them being used on AC equipment, with possible standardisation likely to use connectors such as the Anderson Powerpole® Pak.[30]
Low-voltage cable routing
Data cabling is typically routed through overhead cable trays in modern data centers. Some still recommend routing cabling under the raised floor for security reasons, and to leave room for cooling systems above the racks in case that enhancement becomes necessary. Smaller or less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a hot aisle arrangement to maximize airflow efficiency.
Fire protection
Data centers feature fire protection systems, including passive and active design elements, as well as implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smoldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using hand held fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full scale fire if it develops. Fire sprinklers require 18 in (46 cm) of clearance (free of cable trays, etc.) below the sprinklers. Clean agent fire suppression gaseous systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of fire walls around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, or if they are not installed. For critical facilities these firewalls are often insufficient to protect heat-sensitive electronic equipment, however, because conventional firewall construction is only rated for flame penetration time, not heat penetration. There are also deficiencies in the protection of vulnerable entry points into the server room, such as cable penetrations, coolant line penetrations and air ducts. For mission critical data centers fireproof vaults with a Class 125 rating are necessary to meet NFPA 75[31] standards.
Security
Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel, with controls including bollards and mantraps.[32] Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition mantraps is becoming commonplace.
Energy use



Google Data Center, The Dalles
Main article: IT energy management
Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building.[33] For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.[34] By 2012 the cost of power for the data center is expected to exceed the cost of the original capital investment.[35]
Greenhouse gas emissions
In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint.[36] The US EPA estimates that servers and data centers were responsible for up to 1.5% of total US electricity consumption,[37] or roughly 0.5% of US GHG emissions,[38] for 2007. Given a business-as-usual scenario, greenhouse gas emissions from data centers are projected to more than double from 2007 levels by 2020.[36]
Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and lots of renewable electricity is available, the environmental effects will be more moderate. Thus countries with favorable conditions, such as Canada,[39] Finland,[40] Sweden[41] and Switzerland,[42] are trying to attract cloud computing data centers.
According to an 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data center-related emissions will more than triple by 2020.[43]
Energy efficiency
The most commonly used metric to determine the energy efficiency of a data center is power usage effectiveness, or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.

Power used by support equipment, often referred to as overhead load, mainly consists of cooling systems, power delivery, and other facility infrastructure like lighting. The average data center in the US has a PUE of 2.0,[37] meaning that the facility uses one Watt of overhead power for every Watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2.[44] Some large data center operators like Microsoft and Yahoo! have published projections of PUE for facilities in development; Google publishes quarterly actual efficiency performance from data centers in operation.[45]
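As a minimal sketch of that ratio (the power figures below are hypothetical):

# Sketch: power usage effectiveness (PUE) is total facility power divided by
# the power delivered to the IT equipment. The figures are hypothetical.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,000 kW in total while its IT equipment consumes 500 kW
# has a PUE of 2.0: one watt of overhead for every watt of IT load.
print(pue(1000.0, 500.0))  # 2.0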
The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.[46]
The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.[47]
Network infrastructure



An example of "rack mounted" servers.
Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see Multihoming).
Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.
Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, etc. Also common are monitoring systems for the network and some of the applications. Additional off site monitoring systems are also typical, in case of a failure of communications inside the data center.
Data Center Infrastructure Management

Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center's critical systems. Achieved through the implementation of specialized software, hardware and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.
Depending on the type of implementation, DCIM products can help data center managers identify and eliminate sources of risk to increase availability of critical IT systems. DCIM products also can be used to identify interdependencies between facility and IT infrastructures to alert the facility manager to gaps in system redundancy, and provide dynamic, holistic benchmarks on power consumption and efficiency to measure the effectiveness of “green IT” initiatives.
Applications



A 40-foot Portable Modular Data Center.
The main purpose of a data center is running the applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from enterprise software vendors. Common examples of such applications are ERP and CRM systems.
A data center may be concerned with just operations architecture or it may provide other services as well.
Often these applications will be composed of multiple hosts, each running a single component. Common components of such applications are databases, file servers, application servers, middleware, and various others.
Data centers are also used for off site backups. Companies may subscribe to backup services provided by a data center. This is often used in conjunction with backup tapes. Backups can be taken of servers locally on to tapes. However, tapes stored on site pose a security threat and are also susceptible to fire and flooding. Larger companies may also send their backups off site for added security. This can be done by backing up to a data center. Encrypted backups can be sent over the Internet to another data center where they can be stored securely.
For quick deployment or disaster recovery, several large hardware vendors have developed mobile solutions that can be installed and made operational in very short time. Companies such as Cisco Systems,[48] Sun Microsystems (Sun Modular Datacenter),[49][50] Bull, [51] IBM (Portable Modular Data Center), HP, and Google (Google Modular Data center) have developed systems that could be used for this purpose.[52]

Saturday, September 24, 2011

Mathematical finance

Image via Wikipedia: heat equation example.
Mathematical finance is a field of applied mathematics, concerned with financial markets. The subject has a close relationship with the discipline of financial economics, which is concerned with much of the underlying theory. Generally, mathematical finance will derive and extend the mathematical or numerical models suggested by financial economics. Thus, for example, while a financial economist might study the structural reasons why a company may have a certain share price, a financial mathematician may take the share price as a given, and attempt to use stochastic calculus to obtain the fair value of derivatives of the stock (see: Valuation of options).
In terms of practice, mathematical finance also overlaps heavily with the field of computational finance (also known as financial engineering). Arguably, these are largely synonymous, although the latter focuses on application, while the former focuses on modeling and derivation (see: Quantitative analyst). The fundamental theorem of arbitrage-free pricing is one of the key theorems in mathematical finance. Many universities around the world now offer degree and research programs in mathematical finance; see Master of Mathematical Finance.
History: Q versus P

There exist two separate branches of finance that require advanced quantitative techniques: derivatives pricing on the one hand, and risk and portfolio management on the other hand. One of the main differences is that they use different probabilities, namely the risk-neutral probability, denoted by "Q", and the actual probability, denoted by "P".
Derivatives pricing: the Q world
The goal of derivatives pricing is to determine the fair price of a given security in terms of more liquid securities whose price is determined by the law of supply and demand. Examples of securities being priced are plain vanilla and exotic options, convertible bonds, etc. Once a fair price has been determined, the sell-side trader can make a market on the security. Therefore, derivatives pricing is a complex "extrapolation" exercise to define the current market value of a security, which is then used by the sell-side community.
In summary, the Q world of derivatives pricing can be characterised as follows:
Goal: "extrapolate the present"
Environment: risk-neutral probability ℚ
Processes: continuous-time martingales
Dimension: low
Tools: Ito calculus, PDEs
Challenges: calibration
Business: sell-side
Quantitative derivatives pricing was initiated by Louis Bachelier in The Theory of Speculation (published 1900), with the introduction of the most basic and most influential of processes, the Brownian motion, and its applications to the pricing of options. However, Bachelier's work hardly caught any attention outside academia.
Main article: Black–Scholes
The theory remained dormant until Fischer Black and Myron Scholes, along with fundamental contributions by Robert C. Merton, applied the second most influential process, the geometric Brownian motion, to option pricing. For this M. Scholes and R. Merton were awarded the 1997 Nobel Memorial Prize in Economic Sciences. Black was ineligible for the prize because of his death in 1995.
The next important step was the fundamental theorem of asset pricing by Harrison and Pliska (1981), according to which the suitably normalized current price P0 of a security is arbitrage-free, and thus truly fair, only if there exists a stochastic process Pt with constant expected value which describes its future evolution:

\[ P_0 = \mathbb{E}_0^{\mathbb{Q}}\left[\,P_t\,\right] \qquad (1) \]
A process satisfying (1) is called a "martingale". A martingale does not reward risk. Thus the probability of the normalized security price process is called "risk-neutral" and is typically denoted by the blackboard font letter "ℚ".
The relationship (1) must hold for all times t: therefore the processes used for derivatives pricing are naturally set in continuous time.
The quants who operate in the Q world of derivatives pricing are specialists with deep knowledge of the specific products they model.
Securities are priced individually, and thus the problems in the Q world are low-dimensional in nature. Calibration is one of the main challenges of the Q world: once a continuous-time parametric process has been calibrated to a set of traded securities through a relationship such as (1), a similar relationship is used to define the price of new derivatives.
The main quantitative tools necessary to handle continuous-time Q-processes are Ito's stochastic calculus and partial differential equations (PDEs).
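As a minimal sketch of how relation (1) is applied in practice, the snippet below prices a European call option by Monte Carlo under the risk-neutral measure, assuming geometric Brownian motion dynamics; all parameters are hypothetical, and this illustrates the idea rather than any production pricing model.

import math
import random

# Sketch: today's price is the discounted Q-expectation of the payoff
# (relation (1)), estimated here by Monte Carlo under geometric Brownian
# motion. All parameters are hypothetical.
def mc_call_price(s0, strike, r, sigma, t, n_paths=100_000, seed=42):
    rng = random.Random(seed)
    total_payoff = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Under the risk-neutral measure the drift of the underlying is r.
        s_t = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        total_payoff += max(s_t - strike, 0.0)
    return math.exp(-r * t) * total_payoff / n_paths

print(mc_call_price(s0=100.0, strike=105.0, r=0.02, sigma=0.25, t=1.0))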
Risk and portfolio management: the P world
Risk and portfolio management aims at modelling the probability distribution of the market prices of all the securities at a given future investment horizon.
This "real" probability distribution of the market prices is typically denoted by the blackboard font letter "ℙ", as opposed to the "risk-neutral" probability "ℚ" used in derivatives pricing.
Based on the P distribution, the buy-side community takes decisions on which securities to purchase in order to improve the prospective profit-and-loss profile of their positions considered as a portfolio.


Relative currency strength

Image via Wikipedia: Stock market of Brussels.
The Relative currency strength (RCS) is a technical indicator used in the technical analysis of the forex market. It is intended to chart the current and historical strength or weakness of a currency based on the closing prices of a recent trading period. It is based on the Relative Strength Index (RSI) and a mathematical decorrelation of 28 cross currency pairs, and it shows the relative strength momentum of a selected major currency (EUR, GBP, AUD, NZD, USD, CAD, CHF, JPY).
The RCS is typically used on a 14-period timeframe, measured on a scale from 0 to 100 like the RSI, with high and low levels marked at 70 and 30, respectively. Shorter or longer timeframes are used for alternately shorter or longer outlooks. More extreme high and low levels (80 and 20, or 90 and 10) occur less frequently but indicate stronger momentum of the currency.
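For illustration, the sketch below computes a plain 14-period RSI with Wilder smoothing, the 0-to-100 scale that the RCS borrows; it is not the published RCS decorrelation formula, and the closing-price list is whatever series the trader's data feed supplies.

# Sketch: a plain 14-period RSI with Wilder smoothing, showing the 0-100 scale
# that RCS borrows. Not the RCS decorrelation formula; `closes` is any list of
# closing prices with at least period + 1 values.
def rsi(closes, period=14):
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for gain, loss in zip(gains[period:], losses[period:]):
        # Wilder's smoothing of the average gain and loss
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

Readings above 70 mark strong momentum and readings below 30 mark weak momentum, matching the levels described above.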
Combination of Relative currency strength and Absolute currency strength indicators gives you entry and exit signals for currency trading.
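The article does not spell out the exact formula, so the following is only a hedged sketch of the underlying idea: compute a 14-period RSI for each pair, then combine the readings of every pair in which a given currency appears, inverting pairs where it is the quote currency. The pair-naming convention and the simple averaging are my own assumptions, not the indicator's published formula.

# Hedged sketch of a relative-currency-strength reading built from per-pair RSIs.
import numpy as np

def rsi(closes, period=14):
    deltas = np.diff(np.asarray(closes, dtype=float))
    gains = np.clip(deltas, 0, None)
    losses = np.clip(-deltas, 0, None)
    avg_gain = gains[-period:].mean()
    avg_loss = losses[-period:].mean()
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

def relative_strength(currency, pair_closes, period=14):
    """pair_closes maps names like 'EURUSD' to sequences of closing prices."""
    readings = []
    for pair, closes in pair_closes.items():
        base, quote = pair[:3], pair[3:]
        if currency == base:
            readings.append(rsi(closes, period))
        elif currency == quote:
            readings.append(100.0 - rsi(closes, period))   # flip pairs where the currency is quoted
    return float(np.mean(readings)) if readings else float("nan")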
Basic idea

The indicator's basic idea is to "buy a strong currency and sell a weak currency".
If the X/Y currency pair is in an uptrend, the indicator shows whether this is due to X's strength or Y's weakness.
Based on these signals you can choose the pair most worth trading.
Signals

You can use Relative Currency Strength for pattern trading as well; among the basic patterns that can be used are the cross, trend break, trend follow, and divergences.

(Chart illustrations: cross, trend break, and divergence signals.)
Indicator

(Chart illustrations: the combination of Relative Currency Strength with Absolute Currency Strength, and Absolute Currency Strength on its own.)
Advantages for trading strategies

It is most commonly used in combination with Absolute Currency Strength. Typical uses include:
- an information indicator showing which currencies are in demand, which makes it well suited to trend-following traders;
- a help for scalpers looking for a strong trend (the trader can see both absolute and relative strength);
- an instrument for correlation/spread traders to see how each currency reacts to moves in correlated instruments (for example CAD/oil or AUD/gold).


Currency strength

Currency strength expresses the value of a currency. For economists, it is often calculated as purchasing power,[1] while for financial traders it can be described as an indicator reflecting many factors related to the currency, for example fundamental data, overall economic performance or interest rates.[2] It can also be calculated from a currency in relation to other currencies, usually using a pre-defined currency basket. A typical example of this method is the U.S. Dollar Index. The current trend in currency strength indicators is to combine several currency indexes in order to make forex movements easily visible. For the calculation of these kinds of indexes, major currencies are usually used because they represent up to 90% of the whole forex market volume.[3]
Currency strength based trading indicators

Currency strength is calculated from the U.S. Dollar Index, which is used as a reference for other currency indexes.[4]
The basic idea behind these indicators is "to buy a strong currency and to sell a weak currency".
If the X/Y currency pair is in an uptrend, they help you determine whether this is due to X's strength or Y's weakness.[5]
With these kinds of indicators you can choose the most valuable pair to trade; see how each currency reacts to moves in correlated instruments (for example CAD/oil or AUD/gold); look for a strong trend in one currency; and observe most of the forex market in one chart.
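As a concrete example of the basket approach, the sketch below evaluates the commonly cited U.S. Dollar Index weighting (EUR 57.6%, JPY 13.6%, GBP 11.9%, CAD 9.1%, SEK 4.2%, CHF 3.6%); the exchange rates passed in are illustrative only, and any deviation from the official index calculation should be taken as my own simplification.

# Sketch of a basket-style strength measure using the commonly cited
# U.S. Dollar Index weights. Input rates are illustrative, not live quotes.
def usd_index(eurusd, usdjpy, gbpusd, usdcad, usdsek, usdchf):
    return (50.14348112
            * eurusd ** -0.576
            * usdjpy ** 0.136
            * gbpusd ** -0.119
            * usdcad ** 0.091
            * usdsek ** 0.042
            * usdchf ** 0.036)

print(usd_index(eurusd=1.08, usdjpy=150.0, gbpusd=1.27, usdcad=1.36, usdsek=10.5, usdchf=0.88))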
Examples
Typical examples of indicators based on currency strength are relative currency strength and absolute currency strength. Their combination is called the "Forex Flow indicator", because it lets you see the whole currency flow across the forex market.



Forex Swing Trading Strategy Explained


Forex swing trading is one of my favourite trading methods because swings happen so frequently, giving all traders plenty of opportunities to trade them.
However, there are times when the swings are more vigorous, and this is when you can make more money. Typically the forex market moves in waves, and these waves are what are known as swings. You may be thinking that there are so many swings on a chart, so is it possible to trade them all?
The answer is NO. If you take a close look at the swings, you will find that most of them do not move by a lot of pips. Therefore today I will be revealing the time when I usually trade forex swings, which is also when the price moves are bigger and therefore more profitable to trade.
(Chart illustrations: a small swing versus a big swing.)
First of all, let me go through the definition of a swing for those of you who are new to this field. Basically, a swing is made up of a V or N shape, and it is formed by a reversal or retracement in price movement.
(Chart illustrations: a V-shaped swing and an N-shaped swing.)
The best time to trade forex swings is during the London open and the New York open, as these are the times with the most violent swings.
Forex Indicators Required To Trade Forex Swings: trend lines, the MACD, and an oscillator (the tools used in the steps below).
Here is how you can trade forex swings:
1) Time To Do Technical Analysis: As the swing often occurs at the London open or the New York open, you should do your technical analysis one hour before the opening time. This gives you ample time to analyze the market and figure out all the major support and resistance levels.
2) Trend Line: To trade a forex swing, you should wait for a trend line break to confirm the reversal or retracement of the price that makes up the swing. Take note that you should never enter your trade before a trend line break occurs, as you may be stopped out of your position if the price fails to break the line and ends up being repelled by it instead.
3) Verify The Break: There are times when the price breaks through the trend line and moves back within the next candle. This is what traders call a "fake out", and it can usually be minimised with the help of the MACD.
When you see the price breaking out of the trend line, check the MACD histogram to see if it has flipped to the other side. If it has not, there is a high chance that you are seeing a fake out in action (see the sketch after this list).
4) Check Your Oscillator: This is the last check before you enter your trade. If you are looking to go LONG, check the oscillator to see if the market is oversold, and if you are looking to go SHORT, check whether the oscillator is overbought. This gives you an additional edge towards a winning trade.
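To make step 3 concrete, here is a minimal sketch of checking whether the MACD histogram has flipped to the other side after a trend line break. The 12/26/9 settings are the usual defaults and the helper names are my own; this only illustrates the idea, not the exact tool used in the strategy above.

# Minimal sketch: has the MACD histogram flipped sign on the latest candle?
import numpy as np

def ema(values, period):
    alpha = 2.0 / (period + 1)
    out = np.empty(len(values), dtype=float)
    out[0] = values[0]
    for i in range(1, len(values)):
        out[i] = alpha * values[i] + (1.0 - alpha) * out[i - 1]
    return out

def macd_histogram(closes, fast=12, slow=26, signal=9):
    closes = np.asarray(closes, dtype=float)
    macd_line = ema(closes, fast) - ema(closes, slow)
    return macd_line - ema(macd_line, signal)

def break_confirmed(closes):
    hist = macd_histogram(closes)
    return hist[-1] * hist[-2] < 0   # a sign flip on the latest candle suggests a genuine break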
(Chart illustration: a real swing.)
The above is how I trade forex swings, and you can try these steps to see if they work for you as well.
You can check out my other posts that show how I trade my forex breakout strategy as well as my forex scalping system.
In case you are interested in learning more about the forex swing strategy, this is one place where you can learn how to trade it effectively. In fact, I have purchased the course before and found it very effective. Click here to find out more


Developing Your Own Trading Plan

Now that you're about halfway through college, here's one piece of advice you should always remember.

Be your own trader.

Don't follow someone else's trading advice blindly. Just because someone may be doing well with their method, it doesn't mean it will work for you. We all have different market views, thought processes, risk tolerance levels, and market experience.

Have your own personalized trading plan and update it as you learn from the market.



With rock solid discipline, your trading could look like this.

Developing a Trading Plan and sticking to it are the two main ingredients of trading discipline.

But trading discipline isn't enough.

Even solid trading discipline isn't enough.

It has to be rock solid discipline.

We repeat: rock solid. Like Jacob Black's abs.

Plastic solid discipline won't do. Nor will discipline made from straws and sticks.

We don't want to be little piggies. We want to be successful traders!

And having rock solid trading discipline is the most important characteristic of successful traders.

A trading plan defines what is supposed to be done, why, when, and how. It covers your trader personality, personal expectations, risk management rules, and trading system(s).

When followed, a trading plan will help limit trading mistakes and minimize your losses. After all, "if you fail to plan, then you've already planned to fail."






Using Equities to Trade FX

Did you know that equity markets can also be used to help gauge currency movement? In a way, you can use the equity indices as some kind of a forex crystal ball.

Based on what you see on the television, what you hear on the radio, and what you read in the newspaper, it seems that the stock (equity) market is the most closely covered financial market. It's definitely exciting to trade since you can buy the companies that make the products you can't live without.



One thing to remember is that in order to purchase stocks from a particular country, you must first have the local currency.

To invest in stocks in Japan, a European investor must first exchange his euros (EUR) into Japanese yen (JPY). This increased demand for JPY causes the value of the JPY to appreciate. On the other hand, selling euros increases their supply, which drives the euro's value lower.

When the outlook for a certain stock market is looking good, international money flows in. On the other hand, when the stock market is struggling, international investors take their money out and look for a better place to park their funds.

Even though you may not trade stocks, as a forex trader, you should still pay attention to the stock markets in major countries.

If the stock market in one country starts performing better than the stock market in another country, you should be aware that money will probably be moving from the country with the weaker stock market to the country with the stronger stock market.

This could lead to a rise in value of the currency for the country with the stronger stock market, while the value of the currency could depreciate for the country with the weaker stock market. The general idea is: strong stock market, strong currency; weak stock market, weak currency.

If you bought the currency from the country with the stronger stock market and sold the currency from the country with the weaker stock market, you can potentially make some nice dough.
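One simple way to eyeball this relationship yourself, sketched below with random placeholder series rather than real index and currency data, is to look at the rolling correlation between an equity index's returns and a currency pair's returns:

# Sketch: rolling correlation between equity-index returns and FX returns.
# The two series below are random placeholders, not real market data.
import numpy as np
import pandas as pd

index_returns = pd.Series(np.random.default_rng(2).normal(0, 0.010, 500))
fx_returns = pd.Series(np.random.default_rng(3).normal(0, 0.006, 500))

rolling_corr = index_returns.rolling(60).corr(fx_returns)   # 60-day window
print(rolling_corr.tail())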




Carry Trade

Did you know there is a trading system that can make money even if the price stays exactly the same for long periods of time?

Well, there is, and it's one of the most popular ways of making money for many of the biggest and baddest money manager mamajamas in the financial universe!

It's called the "Carry Trade".



A carry trade involves borrowing or selling a financial instrument with a low interest rate, then using it to purchase a financial instrument with a higher interest rate.

While you are paying the low interest rate on the financial instrument you borrowed/sold, you are collecting higher interest on the financial instrument you purchased. Thus your profit is the money you collect from the interest rate differential.

For example:

Let's say you go to a bank and borrow $10,000. Their lending fee is 1% of the $10,000 every year.

With that borrowed money, you turn around and purchase a $10,000 bond that pays 5% a year.

What's your profit?

Anyone?

You got it! It's 4% a year! The difference between interest rates!





By now you're probably thinking, "That doesn't sound as exciting or profitable as catching swings in the market."

However, when you apply it to the spot forex market, with its higher leverage and daily interest payments, sitting back and watching your account grow daily can get pretty sexy.

To give you an idea, a 3% interest rate differential becomes 60% interest a year on an account that is leveraged 20 times!
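A quick back-of-the-envelope check of that claim (the rates below are assumed purely for illustration, not a recommendation):

# Illustrative carry arithmetic: earn 5%, pay 2%, so a 3% differential;
# on an account leveraged 20 times that is roughly 60% of equity per year,
# ignoring price moves, rollover conventions, and trading costs.
earn_rate, pay_rate = 0.05, 0.02
leverage = 20
annual_carry_on_equity = (earn_rate - pay_rate) * leverage
print(f"{annual_carry_on_equity:.0%}")   # prints 60%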

In this section, we will discuss how carry trades work, when they will work, and when they will NOT work.

We will also tackle risk aversion (WTH is that?!? Don't worry, like we said, we'll be talking more about it later).






Foreign exchange autotrading

Forex autotrading is a trading strategy where buy and sell orders are placed automatically based on an underlying system or program on the foreign exchange market. The buy or sell orders are sent out to be executed in the market when a certain set of criteria is met.
Autotrading systems, or programs that generate buy and sell signals, are typically used by active traders who enter and exit positions more frequently than the average investor. The autotrading criteria differ greatly; however, they are mostly based on technical analysis.[1]
History

Forex autotrading originated with the emergence of online retail trading, around 1999, when internet-based companies created retail forex platforms that provide a quick way for individuals to buy and sell on the forex spot market. Nevertheless, larger retail traders could autotrade forex contracts at the Chicago Mercantile Exchange as early as the 1970s.
Types

There are two major types of Forex autotrading:
Fully automated or robotic Forex trading: This is very similar to algorithmic trading or black-box trading, where a computer algorithm decides on aspects of the order such as the timing, price or quantity, and initiates the order automatically. Users can only intervene by tweaking the program's technical parameters; all other control is handed over to the program (a toy sketch of this style follows after the two types).
Signal-based Forex autotrading: This mode is based on manually executing orders generated by a trading system. For example, a typical approach is to use a service where traders all over the world make their strategies available, in the form of signals, to anyone interested. Traders may choose to manually execute any of these signals in their own broker accounts.
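As a toy illustration of the fully automated style, the sketch below uses a moving-average crossover rule, chosen purely as an example, to generate buy and sell signals without human input; the order-sending function is just a stub, since a real system would call a broker's API.

# Toy fully-automated loop: a moving-average crossover rule (example only)
# turns new price data into buy/sell signals. Order routing is a stub.
import numpy as np

def send_order(side):
    print(f"order placed: {side}")   # placeholder for a real broker API call

def crossover_signal(closes, fast=10, slow=30):
    closes = np.asarray(closes, dtype=float)
    if len(closes) < slow + 1:
        return "hold"
    fast_now, slow_now = closes[-fast:].mean(), closes[-slow:].mean()
    fast_prev, slow_prev = closes[-fast - 1:-1].mean(), closes[-slow - 1:-1].mean()
    if fast_prev <= slow_prev and fast_now > slow_now:
        return "buy"
    if fast_prev >= slow_prev and fast_now < slow_now:
        return "sell"
    return "hold"

def on_new_price(price_history):
    action = crossover_signal(price_history)
    if action != "hold":
        send_order(action)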
Advantages
An automated trading environment can generate more trades per market than a human trader can handle and can replicate its actions across multiple markets and timeframes. An automated system is also unaffected by the psychological swings that human traders are prey to. This is particularly relevant when trading with a mechanical model, which is typically developed on the assumption that all the trade entries flagged will actually be taken in real time trading.[2]
Signal Provider based models offer traders the opportunity to follow previously successful signal providers or strategies with the hope that the advice they offer will continue to be accurate and lead to profitable future trades. Traders do not need to have expert knowledge or ability to define their own strategies and instead can select a system based on its performance to date, making Forex trading accessible to a large number of people.
Disadvantages
Because the forex market is decentralized and relatively unregulated, it is extremely attractive to a number of forex scams. Forex autotrading, as it brings forex trading to the masses, makes even more people susceptible to fraud. Bodies such as the National Futures Association and the U.S. Securities and Exchange Commission have issued warnings and rules aimed at avoiding fraudulent forex trading behavior.[3]
