Monday, 20 January 2014

NET NEUTRALITY

What is net neutrality?
Net neutrality is an idea derived from how telephone lines have worked since the beginning of the 20th century. In the case of a telephone line, you can dial any number and connect to it. It does not matter whether you are calling from operator A to operator B. It doesn't matter whether you are calling a restaurant or a drug dealer. Operators neither block access to a number nor deliberately delay connections to a particular number, unless required by law. Most countries have rules requiring telecom operators to provide unfiltered and unrestricted phone service.

When the internet started to take off in the 1980s and 1990s, there were no specific rules requiring internet service providers (ISPs) to follow the same principle. But, mostly because telecom operators were also ISPs, they adhered to it anyway. This principle is known as net neutrality. An ISP does not control the traffic that passes through its network. When a web user connects to a website or web service, he or she gets the same speed. The data rate for YouTube videos and Facebook photos is theoretically the same. Users can access any legal website or web service without any interference from an ISP.

How did net neutrality shape the internet?

Net neutrality has shaped the internet in two fundamental ways.

One, web users are free to connect to whatever website or service they want. ISPs do not bother with what kind of content flows through their networks. This has allowed the internet to grow into a truly global network and has allowed people to express themselves freely. For example, you can criticize your ISP in a blog post and the ISP will not restrict access to that post for its other subscribers, even though the post may harm its business.

But more importantly, net neutrality has enabled a level playing field on the internet. To start a website, you don't need a lot of money or connections. Just host your website and you are good to go. If your service is good, it will find favour with web users. Unlike cable TV, where you have to forge alliances with cable connection providers to make sure your channel reaches viewers, on the internet you don't have to talk to ISPs to put your website online. This has led to the creation of Google, Facebook, Twitter and countless other services. All of these services had very humble beginnings. They started as basic websites with modest resources. But they succeeded because net neutrality allowed web users to access them in an easy and unhindered way.

What will happen if there is no net neutrality? 

If there is no net neutrality, ISPs will have the power (and the inclination) to shape internet traffic so that they can derive extra benefit from it. For example, several ISPs believe that they should be allowed to charge companies behind services like YouTube and Netflix because these services consume more bandwidth than a normal website. Basically, these ISPs want a share of the money that YouTube or Netflix make.

Without net neutrality, the internet as we know it will not exist. Instead of free access, there could be "package plans" for consumers. For example, if you pay Rs 500, you will only be able to access websites based in India. To access international websites, you may have to pay more. Or there could be different connection speeds for different types of content, depending on how much you are paying for the service and what "add-on package" you have bought.

Lack of net neutrality will also spell doom for innovation on the web. It is possible that ISPs will charge web companies to enable faster access to their websites. Those who don't pay may find that their websites open slowly. This means a bigger company like Google will be able to pay to make access to YouTube or Google+ faster for web users, but a startup that wants to create a different and better video hosting site may not be able to do so.

Will the concept of net neutrality survive?

Net neutrality is a sort of gentlemen's agreement. It has survived so far because few people realized the potential of the internet when it took off around 30 years ago. But now that the internet is an integral and incredibly important part of society, ISPs across the world are trying to gain the power to shape and control its traffic. There are, however, ways to keep net neutrality alive.

Consumers should demand that ISPs continue their hands-off approach to internet traffic. If consumers see a violation of net neutrality, they ought to take a proactive approach and register their displeasure with the ISP. They should also reward ISPs that uphold net neutrality.

Monday, 2 December 2013

MOBILE MALWARE

Mobile malware has emerged as a real and significant problem. Addressing it is no longer optional. As with other IT security risks, technology isn’t a silver bullet, but it is a key component of a holistic solution that also incorporates people and process.

A mobile virus is malicious software that targets mobile phones or wireless-enabled PDAs, and can cause system failures and the loss or leak of confidential information. The insidious objectives of mobile malware range from spying to keylogging, from text messaging to phishing, and from unwanted marketing to outright fraud.

Fifty-nine percent of IT and security professionals surveyed by the Ponemon Institute recently said mobile devices are increasing the prevalence of malware infections within their organizations. This is no shock: the extraordinary growth of mobile platforms has made them an irresistible target. The only surprise would have been if these devices had escaped attack.

Years ago, PC malware exploded when Windows achieved dominance. Something similar is occurring with mobile. As the mobile marketplace has grown and evolved, the Android platform has become dominant. Worldwide, 70% of new smartphones now run Android, with iOS running a distant second. (Microsoft's Windows Phone 8 platform offers promise, but hasn't yet achieved significant market penetration.)

The Android platform's openness has made it attractive to users, device manufacturers, carriers and app developers, and also to malware creators. That's where they're focused.

In BYOD arrangements, mobile devices are often owned by users, who act as de facto administrators. Users typically decide which apps to run, and where to get them. Wider smartphone and tablet usage is often correlated with a loss of organizational control. And that, in turn, can compromise security in multiple ways. This is why some organizations are pursuing choose your own device (CYOD) approaches, where users pick their devices from a list the company is prepared to support, will continue to own, and plans to centrally administer. Of course, CYOD isn't always an option, and many organizations have chosen to accept the tradeoffs associated with full BYOD.

Mobile malware risks
Organizations evaluating mobile malware risks should assess each of the ways it can damage them, including the following.

Productivity losses: Some forms of malware inconvenience users through aggressive advertising, prevent mobile devices from working properly, and increase support costs.

Direct costs: Some forms of malware and potentially unwanted applications (PUAs) have direct costs by utilizing paid mobile services such as SMS, with or without the user’s awareness or understanding.

Security, privacy, and compliance risks: Mobile malware can compromise corporate and customer data, systems, and assets that must be protected—placing the organization at competitive, reputational and legal risk.

Some mobile malware and PUAs merely annoy and frustrate. Yet as a whole, mobile malware and PUAs represent a significant and growing problem.

Sunday, 3 November 2013

GREEN COMPUTING

Driven by rising electricity costs, green legislation and corporate social responsibility, green IT is increasingly on many IT professionals’ minds, particularly for the power-hungry data centre. Whatever the reasons, experts say that in the long run, having an energy-efficient data centre helps the environment and also saves businesses money.

Technologies that can help data centres become green:

Data centre infrastructure management:
Experts rate data centre infrastructure management (DCIM) tools as one of the coolest technologies that can help companies make their infrastructure energy-efficient and green. Until 2009, DCIM had virtually no market penetration, but today it is one of the most significant areas of green computing. DCIM brings together standalone functions such as data centre design, asset discovery, systems management, capacity planning and energy management to provide a holistic view of the data centre, ranging from the rack or cabinet level to the cooling infrastructure and energy utilisation. It helps encourage the efficient use of energy, optimise equipment layouts, support virtualisation and consolidation, and improve data centre availability.

Free air cooling
Data centre power use is high on the agenda for most data centre developers. Energy costs have become the largest single element in the data centre's total cost of ownership (TCO), ranging from 20% to 60% depending on the facility's business model, and as energy prices (and/or taxes) rise, this share will only grow. Free or natural air cooling is the practice of using outside air to cool data centre facilities rather than running power-hungry mechanical refrigeration or air-conditioning units.
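To make the energy argument concrete, here is a minimal back-of-the-envelope sketch in Python. The IT load, PUE figures and electricity tariff are illustrative assumptions rather than measurements from any particular facility; the point is only to show how the lower cooling overhead of free air cooling translates into an annual saving.

    # Illustrative sketch (assumed figures): compare the annual energy cost of a
    # data centre using mechanical chillers with one relying mostly on free air
    # cooling, with Power Usage Effectiveness (PUE) as a simple proxy for overhead.
    IT_LOAD_KW = 500.0        # assumed average IT load
    HOURS_PER_YEAR = 24 * 365
    TARIFF_PER_KWH = 0.12     # assumed electricity price per kWh

    PUE_MECHANICAL = 1.8      # assumed PUE with conventional refrigeration
    PUE_FREE_COOLING = 1.3    # assumed PUE when outside air does most of the cooling

    def annual_energy_cost(it_load_kw: float, pue: float) -> float:
        """Total facility energy cost per year: IT load scaled by PUE times tariff."""
        return it_load_kw * pue * HOURS_PER_YEAR * TARIFF_PER_KWH

    cost_mechanical = annual_energy_cost(IT_LOAD_KW, PUE_MECHANICAL)
    cost_free = annual_energy_cost(IT_LOAD_KW, PUE_FREE_COOLING)
    print(f"Mechanical cooling: {cost_mechanical:,.0f} per year")
    print(f"Free air cooling:   {cost_free:,.0f} per year")
    print(f"Estimated saving:   {cost_mechanical - cost_free:,.0f} per year")

With these assumed numbers the saving is roughly a quarter of the facility's energy bill, which is why free cooling ranks so highly among green data centre techniques.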

Low-power servers
Data centre operators are looking for more efficient alternatives to the current x86 standard server racks and blades to make their infrastructure sustainable in the long term.

On-site wind generation or use of renewable energy
A number of large businesses, including Apple, Facebook and Google, are taking initiatives to power their data centres using wind energy.

Data centre consolidation and virtualisation
Virtualisation and data centre consolidation strategies help enterprises streamline IT resources and utilise the untapped processing power of high-power server and storage devices. The combination of virtualisation, low-latency and high-bandwidth network connectivity and specialised servers has the potential to slash data centre capital costs and improve energy efficiency.
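As a rough illustration of why consolidation saves energy, the sketch below (all utilisation figures are assumptions) estimates how many virtualisation hosts could replace a legacy estate of lightly loaded one-application-per-box servers.

    # Illustrative consolidation arithmetic (assumed utilisation figures): estimate
    # how many virtualisation hosts are needed to absorb a lightly used server estate.
    import math

    PHYSICAL_SERVERS = 200     # assumed legacy estate, one application per server
    AVG_UTILISATION = 0.10     # assumed average CPU utilisation before virtualisation
    TARGET_UTILISATION = 0.60  # assumed safe utilisation target per virtualisation host
    HEADROOM = 1.25            # assumed sizing headroom for peaks and failover

    useful_work = PHYSICAL_SERVERS * AVG_UTILISATION   # total CPU demand in "server units"
    hosts_needed = math.ceil(useful_work / TARGET_UTILISATION * HEADROOM)

    print(f"Hosts after consolidation: {hosts_needed} (down from {PHYSICAL_SERVERS})")

Under these assumptions roughly 42 hosts replace 200 servers, and fewer powered-on machines directly means less energy drawn and less heat to cool.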

Cloud computing
Cloud computing can help enterprises in their green IT efforts, since a computing cloud offers higher CPU utilisation.

Energy-efficient cooling in the data centre
Many data centres are still run against old-style environmental designs, where the approach to cooling is based around ensuring that the input cooling air is at such a low temperature that the outlet air does not exceed a set temperature. In many cases, the aim has been to keep the average volumetric temperature in the data centre around 20°C or lower, with some facilities running at between 15°C and 17°C.

The other technologies that can help data centres become green are:

  • Optimising airflow for maximum cooling
  • Increasing a data centre's thermal envelope

Wednesday, 2 October 2013

BUSINESS INTELLIGENCE

Business Intelligence
In the corporate sector, there is a widespread need to use a range of software with different databases. We generate a lot of data every day from software, web services and so on. This data is useless if we are not able to draw insight from it. This is where BI comes into the picture. BI can connect to different databases and web services and collect all the data. It can then analyse this data and provide a lot of insight. It can produce different kinds of reports, dashboards, data visualisations and what-if analyses, and can help management make decisions based on data rather than on plain intuition.

For example, in the energy sector a lot of data is collected from smart devices, DISCOMs, meters and other sources, and by properly analysing this data we can get a lot of insight. We can manage electricity usage better and tap the data to understand the risks of theft and loss of energy.
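As a minimal sketch of the kind of analysis involved, the Python snippet below flags feeders whose billed energy falls well short of the energy supplied, a simple proxy for theft or technical loss. The file name, column names and threshold are hypothetical placeholders, not part of any specific utility's schema.

    # Minimal sketch (hypothetical schema): flag feeders where energy billed is far
    # below energy supplied, a rough proxy for theft or technical losses.
    import pandas as pd

    # Assumed columns: feeder_id, energy_supplied_kwh, energy_billed_kwh
    readings = pd.read_csv("monthly_feeder_readings.csv")   # hypothetical input file

    readings["loss_pct"] = (
        (readings["energy_supplied_kwh"] - readings["energy_billed_kwh"])
        / readings["energy_supplied_kwh"] * 100
    )

    LOSS_THRESHOLD_PCT = 15   # assumed acceptable aggregate loss level
    suspect = readings[readings["loss_pct"] > LOSS_THRESHOLD_PCT]

    print(suspect.sort_values("loss_pct", ascending=False)[["feeder_id", "loss_pct"]])

A real BI deployment would wrap the same logic in scheduled reports and dashboards rather than a one-off script, but the underlying calculation is no more complicated than this.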

BI Basics
There are a number of companies that provide BI software, such as SAS, Microsoft, IBM, SAP, Pentaho and Jaspersoft. The BI software can be used in any sector, and often system integrators or software companies then provide services and produce a sector-specific solution for their end clients.

Installation & Security:
Once a sector-specific solution has been developed, it can then be integrated with any software, website, portal or application. Hence, the software does not really depend on the platform; via web services the solution can be integrated with any platform. User access-based data security can also be provided, so that a user is able to view only the data that is relevant to him or her.
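A minimal sketch of what user access-based data security can look like, assuming a simple model where each user is mapped to the regions they may see; the data, user names and mapping are all hypothetical.

    # Minimal sketch (hypothetical data model): each user sees only the rows
    # belonging to the regions they are entitled to view.
    import pandas as pd

    sales = pd.DataFrame({
        "region": ["North", "South", "North", "East"],
        "revenue": [120, 90, 150, 60],
    })

    # Assumed mapping of users to the regions they are allowed to see.
    USER_REGIONS = {
        "analyst_north": {"North"},
        "cfo": {"North", "South", "East", "West"},
    }

    def visible_rows(df: pd.DataFrame, user: str) -> pd.DataFrame:
        """Return only the rows the given user is entitled to view."""
        allowed = USER_REGIONS.get(user, set())
        return df[df["region"].isin(allowed)]

    print(visible_rows(sales, "analyst_north"))   # shows only the North rows

Commercial BI suites implement this as row-level security configured against the user directory rather than in application code, but the effect for the end user is the same.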

Limitations
The biggest drawback of proprietary software provided by SAS, SAP and the like is cost: their license costs can run into crores of rupees. On the other hand, there are open source BI suites like Pentaho and Jaspersoft that are not that expensive. Moreover, to implement these BI solutions the end client often has to take the services of a software company specializing in the BI software, which is also, in a way, a limitation.

Cross-Departmental Advantages
A well-implemented BI system can help a company in areas like predictive analytics, optimising investments and data-driven decisions. The BI software can be implemented across all departments.
A brief summary of its highlights in some departments:

  • Marketing: helps grow the topline by analyzing campaign returns and promotional yields, optimising expenditure for a profitable ROI, and tracking social-media marketing
  • Sales: identifying the best paths and practices, tracking customer acquisition cost, and improving yearly turnover and sales
  • Inventory: monitoring and adjusting inventory levels
  • Human Resources: tracking and managing employee turnover, attrition rates and recruitment processes


A BI solution typically includes the following business areas:

  • Demand Intelligence
  • Risk Intelligence
  • Asset Intelligence (AI)
  • Customer Service Intelligence

Trends seen in BI Adoption:
  • Information Quality
  • Master Data Management (MDM)
  • Data Governance
  • Enterprise Level BI
  • Enterprise Level Data Transparency
  • Actionable Business Intelligence

Monday, 2 September 2013

DEPLOYING FLASH IN THE ENTERPRISE

Flash technology is changing the way that enterprises approach storage. After years of use in the consumer market, flash has reached a price point and level of maturity at which it is being actively deployed to address the needs of business-critical applications. Hard disk drives (HDDs) have some nagging deficiencies that make provisioning storage for applications with high-performance demands difficult. Because HDDs are capable of performing no more than 300–400 random I/O operations per second (IOPS), a storage system capable of delivering tens of thousands of IOPS requires hundreds of disks, even when the capacity is not needed. Over-provisioning disks to achieve performance goals is a significant capital expense and wastes rack space, power, and cooling. High-performance workloads increasingly require 100,000 IOPS or more, further exacerbating the problem.
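The arithmetic behind the over-provisioning problem is easy to reproduce. The sketch below uses the IOPS range quoted above; the target workload and per-drive capacity are illustrative assumptions.

    # Worked example: how many HDDs are needed purely to satisfy an IOPS target,
    # and how much raw capacity gets provisioned as a side effect.
    import math

    IOPS_PER_HDD = 350        # mid-point of the 300-400 random IOPS range quoted above
    TARGET_IOPS = 100_000     # assumed high-performance workload
    HDD_CAPACITY_TB = 2       # assumed capacity per drive

    disks_needed = math.ceil(TARGET_IOPS / IOPS_PER_HDD)
    capacity_tb = disks_needed * HDD_CAPACITY_TB

    print(f"Disks needed for {TARGET_IOPS:,} IOPS: {disks_needed}")
    print(f"Raw capacity provisioned as a side effect: {capacity_tb} TB")

That works out to roughly 286 drives and over half a petabyte of raw capacity, even if the application only needs a few terabytes.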

Flash is quickly emerging as the preferred way to overcome the nagging performance limitations of hard disk drives. However, because flash comes at a significant price premium, outright replacement of HDDs with flash only makes sense when capacity requirements are relatively small and performance requirements are high. Deployment approaches that combine the performance of flash with the capacity of HDDs, including hybrid storage arrays, server flash, and all-flash arrays, can be cost effective for a broad range of performance requirements. Some storage companies offer a full range of flash solutions, including server flash, hybrid storage arrays, and all-flash arrays. A careful analysis of the cost of each solution at various combinations of performance and capacity helps in choosing the best solution for your storage challenges, based on your performance needs (IOPS and latency), capacity requirements, working set size (amount of hot data), budget, and data protection objectives.

The fastest HDDs have access times of 3–4 milliseconds, resulting in latencies far higher than those of flash-based SSDs, which have latencies measured in microseconds and perform thousands of IOPS per device. HDDs alone may no longer meet the needs of latency-sensitive applications. Because of clear performance advantages coupled with significantly lower power consumption, flash SSDs and other flash devices are beginning to take the place of high-performance HDDs. However, because SSDs currently cost more than 10 times as much per GB of usable storage, IT teams are still searching for the best strategies to deploy flash technology to deliver performance where it's needed while minimizing overall storage costs.
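To show how the trade-off plays out, the sketch below compares three designs for an assumed IOPS and capacity target: HDD only, all flash, and a hybrid in which a small flash tier absorbs the hot working set. Every price and device figure here is an assumption for illustration, not vendor data.

    # Illustrative cost comparison (all figures assumed): meet an IOPS + capacity
    # target with HDD only, all flash, or a hybrid flash-plus-HDD design.
    import math

    TARGET_IOPS = 50_000
    TARGET_CAPACITY_TB = 100
    HOT_DATA_FRACTION = 0.10   # assumed share of data that is "hot"

    HDD = {"iops": 350, "capacity_tb": 2.0, "price": 300}       # assumed per-drive figures
    SSD = {"iops": 20_000, "capacity_tb": 0.8, "price": 1200}

    def drives_for(target_iops, target_tb, dev):
        """Drives needed to satisfy whichever constraint (IOPS or capacity) is tighter."""
        return max(math.ceil(target_iops / dev["iops"]),
                   math.ceil(target_tb / dev["capacity_tb"]))

    hdd_only = drives_for(TARGET_IOPS, TARGET_CAPACITY_TB, HDD) * HDD["price"]
    flash_only = drives_for(TARGET_IOPS, TARGET_CAPACITY_TB, SSD) * SSD["price"]

    # Hybrid: flash sized for the hot data (which serves most of the IOPS),
    # HDD sized for the full capacity.
    hybrid = (drives_for(TARGET_IOPS, TARGET_CAPACITY_TB * HOT_DATA_FRACTION, SSD) * SSD["price"]
              + math.ceil(TARGET_CAPACITY_TB / HDD["capacity_tb"]) * HDD["price"])

    print(f"HDD only:  {hdd_only:,}")
    print(f"All flash: {flash_only:,}")
    print(f"Hybrid:    {hybrid:,}")

With these assumptions the hybrid design comes out cheapest, which is exactly the argument the deployment options below rest on: put flash where the hot data lives and let HDDs carry the bulk capacity.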

There are a number of options for deploying flash in the data center:
  • Hybrid storage solutions combine the performance of flash with the capacity of HDD by targeting hot data to flash using either migration or caching.
  • Server flash solutions may provide persistent solid state storage or cache data from HDD storage onto flash devices installed in servers, delivering extremely low latency for data accessed from cache.
  • All-flash arrays provide maximum performance and a high level of consistency for business-critical applications.

Friday, 2 August 2013

WINDOWS XP AFTER APRIL 2014

April 9, 2014, is the real red-letter day in the history of Windows XP. On that day, any zero-day exploit released into the wild will run rampant on Windows XP systems while Microsoft watches and says "I told you so." When companies beg for a fix, Microsoft will hold one document in each hand: the life-cycle information for Windows XP with a Post-it note that says "You had four years to move to Windows 7," and a contract for custom support.

As Windows XP comes to the end of its life, applications in enterprise desktop and virtualization environments everywhere will feel the effects. Luckily, there are a couple of things you can do if your applications depend on XP: you can use a very old Windows Server platform or jump on the virtualization bandwagon. Microsoft isn't the only company ditching XP. Not only can Microsoft wash its hands of Windows XP support, but so can all the companies that made software for XP.

Even if those companies stopped actively developing for the OS years ago, they are likely still supporting the applications that run on it. After Windows XP's end of life on April 8, 2014, they'll have no reason to continue. The implications of this reality run far and wide. Line-of-business software is surely affected, as are any of the random applications you are using. What really is cause for concern for desktop and virtualization administrators, though, is that security software vendors will likely stop patching, updating and supporting their software.

Why would companies such as McAfee, Symantec, Kaspersky or Trend Micro bother maintaining a product for an OS that is, for all intents and purposes, dead? Those applications might still run, and it could be that their definition files will be updated with the latest viruses for a time, but do you think those companies will pay attention to viruses targeted toward XP after it's gone? Probably not.

What about activation servers?
There is one other question that has yet to be answered, and that is in regard to Microsoft's activation servers. What happens to the part of the system that activates Windows XP? Does it go offline? Is it somehow protected and only available to people who have paid for custom support? Existing machines will no doubt work just fine, but what about rebuilds or new machines? Of course, Microsoft could simply validate all existing keys and let anyone who wants to use XP use it. There's no precedent for this because XP was the first Microsoft OS that required activation. We may just have to wait and see. The bottom line is that running Windows XP in your organization on anything other than a desktop with no network connection, floppy drive, USB ports, or CD drive is an outright liability, bordering on irresponsible. Yes, there are situations that will require it, but if you determine that your organization can't afford to get off Windows XP on the basis of cost alone, you are wrong.

Alternatives for application support
Windows Server 2003 R2 is essentially Windows XP Server, and while the Windows XP end of life date is April 8, 2014, the end of life for Server 2003 R2 comes 15 months after that: July 14, 2015. Since they are roughly the same OS, based on the same kernel, it's likely that anything you require XP for will work on Server 2003 R2 -- and that will buy you more than a year to figure things out.

There are two ways to keep apps running with this approach.
The first is to move those troublesome applications into the data center and use Remote Desktop Services (RDS) to deliver the application. This, of course, requires that the application is capable of running on a terminal server. In fact, this is also a viable means of preserving some of those oddball applications in your RDS environments.

If the application just won't work or has to run on its own system, you have the option of installing Server 2003 R2 on the physical computer. This solution is costly, because you need to purchase server licenses for each of these machines -- but it could cost significantly less than a Custom Support Agreement. Granted, it only buys you 15 months, but you can consider that to be additional motivation to switch platforms.


The bottom line is that you're running out of both time and options when it comes to removing Windows XP from your company.  If you feel more comfortable using physical desktops, go for it. Use VDI if you can (those extra management features will help in the long run); use application virtualization and user environment virtualization, too (it will make it easier to migrate OSes in the future). But, whatever you do, make sure the OS is gone by the time of Windows XP end of life next year.

Tuesday, 2 July 2013

ECONOMETRICS

Econometrics is the application of mathematics, statistics, and computer science to economic data. By applying econometric thinking, CIOs can quantify cost savings and make a strong business case to gain management support for investments in innovation. Econometrics is the overarching business theme when an organization is trying to spur business transformation and turn the current datacenter into a next-generation information center. Convergence is the key accelerator of business velocity, and consolidation has given way to convergence to reduce opex.

Tough economic times require new perspectives and strategies for reducing the cost of infrastructure. The past several years have left many IT organizations with over-provisioned and under-utilised IT capacity. Now, with a squeeze on capital and credit, many organizations are faced with diktats to do more with less. It is time for CIOs to understand how they can save costs to make a strong business case and gain management support for both strategic and tactical investments.

Today, organizations should evaluate new technologies on the basis of how they can contribute to business performance. However, this is becoming increasingly difficult to do. The average share of IT expenditure spent just keeping the lights on is about 80 percent, with many organizations spending over 90 percent, leaving little room for innovation. The biggest factors contributing to this are the difficulty of convincing top management that a transformation project is required, and the difficulty the IT team faces in demonstrating the value that can be generated in terms of ROI.

One way out is applying the principles of econometrics. Cloud computing, VM sprawl, and capacity-on-demand architectures sometimes call for a review of existing IT ecosystems, especially storage. One of the first steps is to define and measure current costs: we cannot improve what we cannot measure. This is the core of econometrics and the key to continuous improvement of the storage estate.


When seeking to control storage costs, an organization needs to determine which types of costs are most relevant to control, and then measure them. Reducing costs is often not simply a matter of selecting products, but of designing a storage architecture that is more supportive of the organization's cost-reduction goals. Organizations should use econometrics to follow the money spent on IT assets over their lifetimes, and map IT investments to business benefits and cost improvements.
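A minimal sketch of what "following the money" can look like in practice: express a storage asset's lifetime cost as an annual cost per usable terabyte, so that improvements can be tracked against a measured baseline. All cost elements below are assumptions for illustration.

    # Minimal sketch (assumed cost elements): lifetime cost of a storage asset
    # expressed as an annual cost per usable TB, a baseline metric for econometrics.
    ASSET = {
        "purchase_price": 200_000,       # capex
        "annual_maintenance": 24_000,    # support contract
        "annual_power_cooling": 9_000,   # estimated energy cost
        "annual_admin_cost": 15_000,     # share of staff time
        "usable_capacity_tb": 150,
        "lifetime_years": 5,
    }

    def cost_per_tb_per_year(a: dict) -> float:
        """Total lifetime cost divided by usable capacity-years."""
        opex_per_year = (a["annual_maintenance"]
                         + a["annual_power_cooling"]
                         + a["annual_admin_cost"])
        lifetime_cost = a["purchase_price"] + opex_per_year * a["lifetime_years"]
        return lifetime_cost / (a["usable_capacity_tb"] * a["lifetime_years"])

    print(f"Lifetime cost per usable TB per year: {cost_per_tb_per_year(ASSET):,.0f}")

Once a figure like this exists for every asset class, IT investments can be mapped to concrete cost improvements instead of intuition.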