Thursday, 8 May 2014

MOBILE DEVICE MANAGEMENT

What is the market definition or description of mobile device management?

Enterprise mobile device management (MDM) software is:

(1)  A policy and configuration management tool for mobile handheld devices (smartphones and tablets based on smartphone OSs), and

(2)  An enterprise mobile solution for securing and enabling enterprise users and content. It helps enterprises manage the transition to a more complex mobile computing and communications environment by supporting security, network services, and software and hardware management across multiple OS platforms, and now sometimes laptops and ultrabooks. This is especially important as bring your own device (BYOD) initiatives and advanced wireless computing become the focus of many enterprises. MDM can support corporate-owned as well as personal devices, and helps support a more complex and heterogeneous environment.

Criteria to consider when choosing an MDM solution:

Internal resources for management — Most MDM purchases cover 500 devices or fewer. The size of the company matters less here than the internal resources available to manage devices.

Complexity of data — Gartner's position is that any enterprise data needs to be protected and managed. MDM is a start, enforcing enterprise policy around encryption and authentication. Containers should be used to manage email and other mobile content, like file sharing, or enterprise apps, like sales force automation (SFA); these are also delivered by MDM vendors.

Cross-platform needs - More than ever, companies will begin to support multiple OSs. Although today Apple dominates smartphone sales in the enterprise, users will want to bring a variety of other devices to work that MDM providers can manage in an integrated fashion. Once your company has such a diverse environment, MDM becomes a necessity.

Delivery — Companies need to decide whether they want MDM on-premises or in a SaaS/cloud model. SMBs prefer the SaaS model because it reduces total cost of ownership: there is less hardware to buy and run for a smaller user base. Large companies that are comfortable with the cloud model, usually in non-regulated markets, are also moving toward SaaS. In a global, highly distributed environment, they also like the reduction in hardware and server management that cloud brings versus on-premises servers. MDM managed services are also emerging, but are currently limited in scope and adoption.

Cautions/factors to consider before moving to MDM:

Most companies started out using Exchange ActiveSync (EAS) to manage their devices, but found it lacking in the following areas, which pushed them to purchase a more complete MDM suite:

Volume of devices: It is difficult to manage a large volume of devices on EAS. Once companies get to more than 500 devices, they typically look for a more complete MDM suite.

Mix of platforms: Companies that had two or more mobile OS platforms to manage found it difficult to do so on EAS.

Granular support/policy: More complete MDM systems offer a deeper management capability, with more-detailed policies. For example, EAS allows passwords to be enforced (depending on the mobile OS), but more-comprehensive MDM systems allow more flexibility in the password type, length and complexity (see the policy sketch after this list).

Reporting: EAS is very weak on device reporting. Companies that wanted better reporting moved to more complete MDM systems.

Ability to block certain device platforms: Companies may want to restrict the types of mobile OSs they will support.

Need to identify rooted/jailbroken devices: There is concern over rooted or jailbroken devices because companies cannot control their data if devices are compromised.


Advanced capabilities to manage mobile apps: Application provisioning and updating are important to companies today.
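As an illustration of the granular password policy point above, here is a minimal sketch in Python; the policy keys, thresholds and helper function are entirely hypothetical and not tied to any particular MDM product, which would normally push such a policy to the device over its management channel.

```python
# Hypothetical MDM passcode policy -- names and thresholds are illustrative only.
PASSWORD_POLICY = {
    "min_length": 8,               # EAS can require a passcode; a fuller MDM can also set length,
    "require_alphanumeric": True,  # character classes, rotation and reuse rules.
    "require_special_char": True,
    "max_age_days": 90,
    "history_count": 5,            # disallow reuse of the last N passcodes
}

def passcode_compliant(passcode, previous, policy=PASSWORD_POLICY):
    """Check a candidate device passcode against the (hypothetical) policy."""
    if len(passcode) < policy["min_length"]:
        return False
    has_alpha = any(c.isalpha() for c in passcode)
    has_digit = any(c.isdigit() for c in passcode)
    if policy["require_alphanumeric"] and not (has_alpha and has_digit):
        return False
    if policy["require_special_char"] and not any(not c.isalnum() for c in passcode):
        return False
    if passcode in previous[-policy["history_count"]:]:
        return False
    return True

print(passcode_compliant("Summer2014!", previous=[]))  # True
print(passcode_compliant("12345678", previous=[]))     # False: no letters or special characters
```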

Wednesday, 9 April 2014

7Vs OF BIG DATA

Big Data is a big thing. It will change our world completely and is not a passing fad that will go away. To understand the phenomenon, big data is often described using seven Vs: Volume, Velocity, Variety, Veracity, Value, Versatility and Validity.

Volume refers to the vast amounts of data generated every second. Just think of all the emails, Twitter messages, photos, video clips, sensor data etc. we produce and share every second. We are not talking terabytes but zettabytes or brontobytes. On Facebook alone, users send 10 billion messages, click the "like" button 4.5 billion times and upload 350 million new pictures each and every day. If we take all the data generated in the world between the beginning of time and 2008, the same amount of data will soon be generated every minute! This increasingly makes data sets too large to store and analyse using traditional database technology. With big data technology we can now store and use these data sets with the help of distributed systems, where parts of the data are stored in different locations and brought together by software.
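As a toy illustration of that last point, parts of the data stored in different places and brought together by software, here is a minimal map-reduce style sketch in Python; the shards and the event-counting task are invented for the example.

```python
from collections import Counter
from functools import reduce

# Pretend each shard lives on a different machine; here they are just lists of events.
shard_1 = ["like", "share", "photo", "like"]
shard_2 = ["message", "like", "photo"]
shard_3 = ["like", "message"]

def map_count(shard):
    """'Map' step: each location counts its own events locally."""
    return Counter(shard)

def merge_counts(a, b):
    """'Reduce' step: partial counts are merged centrally."""
    return a + b

partials = [map_count(s) for s in (shard_1, shard_2, shard_3)]
total = reduce(merge_counts, partials, Counter())
print(total)  # Counter({'like': 4, 'photo': 2, 'message': 2})
```

Distributed big data systems follow the same split-then-merge pattern, only across thousands of machines and far larger data sets.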

Velocity refers to the speed at which new data is generated and the speed at which data moves around. Just think of social media messages going viral in seconds, the speed at which credit card transactions are checked for fraudulent activity, or the milliseconds it takes trading systems to analyse social media networks to pick up signals that trigger decisions to buy or sell shares. Big data technology now allows us to analyse the data while it is being generated, without ever putting it into databases.
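A minimal sketch of that idea in Python: inspecting events the moment they are produced, before anything is written to a database. The transaction stream and the simple threshold rule are made up purely for illustration.

```python
import time

def transaction_stream():
    """Stand-in for a live feed; in reality this would be a message queue or socket."""
    for amount in (120.0, 35.5, 9000.0, 42.0, 15000.0):
        yield {"amount": amount, "ts": time.time()}

def flag_suspicious(stream, threshold=5000.0):
    """Check each transaction as it arrives, without storing it first."""
    for tx in stream:
        if tx["amount"] > threshold:
            yield tx

for suspect in flag_suspicious(transaction_stream()):
    print("flag for review:", suspect["amount"])
# flag for review: 9000.0
# flag for review: 15000.0
```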

Variety refers to the different types of data we can now use. In the past we focused on structured data that neatly fits into tables or relational databases, such as financial data (e.g. sales by product or region). Today, however, around 80% of the world’s data is unstructured and therefore can’t easily be put into tables (think of photos, video sequences or social media updates). With big data technology we can now harness different types of data (structured and unstructured), including messages, social media conversations, photos, sensor data, video and voice recordings, and bring them together with more traditional, structured data.

Veracity refers to the messiness or trustworthiness of the data. With many forms of big data, quality and accuracy are less controllable (just think of Twitter posts with hashtags, abbreviations, typos and colloquial speech, as well as the variable reliability and accuracy of the content), but big data and analytics technology now allows us to work with these types of data. The volumes often make up for the lack of quality or accuracy.
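As a small illustration of working with messy text, the Python sketch below normalises the kind of noise the Twitter example mentions (hashtags, odd casing, stray punctuation); the tweets and the rules are invented, and a real cleaning pipeline would be far more involved.

```python
import re

raw_tweets = [
    "OMG #BigData is gr8!!!",
    "big data... BIG DATA #hype",
]

def clean(tweet):
    """Very rough normalisation: lowercase, drop hashtags and punctuation, collapse spaces."""
    tweet = tweet.lower()
    tweet = re.sub(r"#\w+", "", tweet)         # drop hashtags
    tweet = re.sub(r"[^a-z0-9\s]", "", tweet)  # drop punctuation
    return re.sub(r"\s+", " ", tweet).strip()

print([clean(t) for t in raw_tweets])
# ['omg is gr8', 'big data big data']
```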

Value: Then there is another V to take into account when looking at Big Data: Value! It is all well and good having access to big data but unless we can turn it into value it is useless. So you can safely argue that 'value' is the most important V of Big Data. It is important that businesses make a business case for any attempt to collect and leverage big data. It is so easy to fall into the buzz trap and embark on big data initiatives without a clear understanding of costs and benefits.

Versatility refers to the different ways in which the data can be used, i.e. what the data is capable of supporting.

Validity refers to whether the data falls within the expected range and can be verified, i.e. whether it is correct and suitable for its intended use.

Wednesday, 5 March 2014

ZERO CLIENT Vs THIN CLIENT

While the term zero client is something of a marketing buzzword, it is a useful way of differentiating options for the devices that are used to access desktops. A zero client is similar to a thin client in its purpose—accessing a desktop in a data center—but requires a lot less configuration.

Zero clients tend to be small and simple devices with a standard set of features that support the majority of users. They also tend to be dedicated to one data center desktop product and remote display protocol. Typically, configuration is simple—a couple of dozen settings at the most, compared to the thousands of settings you see in a desktop operating system. Zero clients load their simple configuration from the network every time they are powered on, so the zero clients at a site will all be the same. Zero clients support access to a variety of desktop types: terminal services, virtual desktop infrastructure (VDI), or dedicated rack-mount or blade workstations.

The basic premise of zero clients is that the device on the user’s desk doesn’t have any persistent configuration. Instead, it learns how to provide access to the desktop from the network every time it starts up. This gives a lot of operational benefits, since the zero client devices are never unique. This contrasts with a thin client, which may have local applications installed and will hold its configuration on persistent storage in the device.

Thin clients became a mainstream product class shortly after Microsoft introduced Windows Terminal Server and Citrix launched MetaFrame, both in 1998. To enter this market, PC manufacturers cut down their desktop hardware platforms. They repurposed their PC management tools, reusing as much technology as possible from their existing PC business. This meant that a fairly customized Windows or Linux setup could be oriented toward being a thin client.

Over time, optional features for USB redirection, a local web browser, VoIP integration agents and multi-monitor display support were added. Each additional feature adds configuration and complexity to the thin client. After a few years, thin clients had become really small PCs; some even had PCI or PC Card slots added. These thicker thin clients get quite close to a full PC in terms of capabilities and complexity. Instead of simplifying management, IT administrators now needed to manage the device on the user’s desk as well as in the data center. Zero clients, then, are a return to simpler devices on users’ desks—with simpler management.

Zero clients are much simpler to manage, configure and update. Zero client firmware images are a few megabytes, compared with the multiple gigabytes that thin client operating systems take up. The update process itself is much quicker and less intrusive on a zero client, possibly occurring every day when the client boots.

Thin clients need to be patched and updated as often as the desktop operating system they carry; since zero clients have no operating system, they need less frequent updates.

Zero clients have few knobs and switches to turn—probably fewer than 100 configuration items in total—so they are simple to manage. Often, their bulk management is a couple of text files on a network share (see the sketch below). Thin clients have a whole operating system to manage, with tens of thousands of settings necessitating complex management applications, usually on dedicated servers at multiple sites.

A zero client is like a toaster: a consumer can take it out of its packaging and make it work. If the consumer is an employee at a remote branch, there are benefits to having that worker be able to deploy a new terminal. Thin clients, by contrast, sometimes need special builds or customized settings applied to them before they are deployed, which is not ideal for rapid deployment. The ability to scale rapidly can be important when it comes to something like opening a call center to accommodate an advertising campaign or a natural disaster response.

Zero clients also have lower power consumption. Thin clients have mainstream CPUs and often graphics processing units, but a zero client usually has a low-power CPU (or none at all), which cuts down on power consumption and heat generation.
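To show how lightweight that bulk management can be, here is a sketch in Python of parsing the kind of small key=value settings file a zero client might pull from a network share at boot; the file contents and setting names are hypothetical, not any vendor's actual format.

```python
# Hypothetical contents of a site-wide settings file on a network share --
# a few dozen key=value lines at most, identical for every device at the site.
SAMPLE_CONFIG = """
connection_broker=vdi.example.internal
display_protocol=PCoIP
usb_redirection=off
screen_count=2
firmware_url=http://fileserver.example.internal/fw/latest.img
"""

def parse_settings(text):
    """Turn key=value lines into a dict; ignore blank lines and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

settings = parse_settings(SAMPLE_CONFIG)
print(settings["connection_broker"])  # vdi.example.internal
```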


The simplicity of zero clients also makes for a much smaller attack surface, so placing them in less trusted networks is not so worrying. Putting them in physically hostile locations is also safe: lower power and usually passive cooling mean that heat, dust and vibration are less likely to cause maintenance problems. Zero clients are all the same, and models are released every couple of years rather than every few months, so your fleet will contain fewer models. That means there’s no need for help desk calls when a device moves from one desk to another, and the user experience is consistent. Your supplier’s inventory of zero clients will also have fewer models, which should lead to better availability when you need new zero clients.

Monday, 17 February 2014

DATA PRIVACY/PROTECTION

High-profile security failures have made privacy protection a top-of-mind issue for many organisations. In several cases, hackers have gained access to online networks and systems, stealing personal customer data such as names, addresses and passwords. The financial costs of these breaches are often significant, ranging from tens of thousands to millions. The damage to a company’s brand and its reputation often costs far more. When we think of cyber risk we tend to think of security breaches, but when we look at it through a privacy lens, the range of risks broadens significantly.

As IT organizations move toward virtualization, cloud computing and IT-as-a-service, data protection will undergo a fundamental shift. The underpinnings of this transformation include a change from one-size-fits-all backup to a data protection offering that matches service levels with application requirements. IT organizations would be wise to bring in outside help to navigate through this transition.
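A minimal sketch of what matching service levels with application requirements could look like in practice: a simple tier table that maps applications to backup frequency and retention instead of one-size-fits-all backup. The tiers, applications and numbers below are invented purely for illustration.

```python
# Hypothetical protection tiers: each application gets the service level it needs.
PROTECTION_TIERS = {
    "gold":   {"backup_interval_hours": 1,  "retention_days": 90, "replicate_offsite": True},
    "silver": {"backup_interval_hours": 6,  "retention_days": 30, "replicate_offsite": True},
    "bronze": {"backup_interval_hours": 24, "retention_days": 7,  "replicate_offsite": False},
}

APPLICATIONS = {
    "order_processing": "gold",
    "hr_intranet": "silver",
    "test_lab": "bronze",
}

def protection_policy(app):
    """Look up the service level an application is entitled to."""
    return PROTECTION_TIERS[APPLICATIONS[app]]

print(protection_policy("order_processing"))
# {'backup_interval_hours': 1, 'retention_days': 90, 'replicate_offsite': True}
```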

There are several issues that an outside consultant can help manage, including:

ROI: The business justification of data protection as a service. Data protection is still viewed as insurance, and a quality risk assessment and business impact analysis from an outsider can have a meaningful impact with upper management.

Training and Education: Organizations have an opportunity to re-skill staff and gain increased leverage by developing data protection approaches that free up existing personnel. As discussed, however, new approaches will require new mindsets and existing staff will have to be educated and in some cases re-deployed on other tasks.

Architecture: Data protection is not trivial. Virtualization complicates the process and creates IO storms. Architecting data protection solutions and a services-oriented approach that is efficient and streamlined can be more effectively accomplished with outside help. Don’t be afraid to ask.

Customers want choices and ease of access, which requires them to provide personal information and preferences; businesses want to be able to gather, mine and share this information efficiently. Certain industries, such as financial services and healthcare, often draw the most attention in the privacy discussion because of the personal information they possess. However, all industries are affected by privacy and data protection requirements. Confirm the organisation does not place misplaced or unfounded reliance on third-party providers that have access to the organisation's own information or that of its customers. Design and implement robust monitoring and testing of privacy and data protection risks and related controls. Most companies have developed and implemented privacy and data protection programs, yet many of these programs fall short for a variety of reasons, including a poor understanding of the risk landscape around information collection and transmittal, inadequate organisational policies, insufficient training and unverified third-party providers, among many others.


The bottom line is that data protection is changing from a one-size-fits-all exercise viewed as expensive insurance into more of a service-oriented solution that can deliver tangible value to the business by clearly reducing risk at a price aligned with business objectives. Understanding data protection holistically (backup, recovery, disaster recovery, archiving and security) and as part of IT-as-a-service is not only good practice; it can also be good for your bottom line.

Monday, 20 January 2014

NET NEUTRALITY

What is net neutrality?
Net neutrality is an idea derived from how telephone lines have worked since the beginning of the 20th century. In the case of a telephone line, you can dial any number and connect to it. It does not matter if you are calling from operator A to operator B. It doesn't matter if you are calling a restaurant or a drug dealer. The operators neither block access to a number nor deliberately delay connection to a particular number, unless forced to by the law. Most countries have rules that require telecom operators to provide an unfiltered and unrestricted phone service.

When the internet started to take off in the 1980s and 1990s, there were no specific rules requiring internet service providers (ISPs) to follow the same principle. But, mostly because telecom operators were also ISPs, they adhered to it anyway. This principle is known as net neutrality. An ISP does not control the traffic that passes through its servers. When a web user connects to a website or web service, he or she gets the same speed; the data rate for YouTube videos and Facebook photos is, in theory, the same. Users can access any legal website or web service without any interference from an ISP.

How did net neutrality shape the internet?

Net neutrality has shaped the internet in two fundamental ways.

One, web users are free to connect to whatever website or service they want. ISPs do not bother with what kind of content is flowing through their servers. This has allowed the internet to grow into a truly global network and has allowed people to express themselves freely. For example, you can criticize your ISP in a blog post and the ISP will not restrict access to that post for its other subscribers, even though the post may harm its business.

But more importantly, net neutrality has enabled a level playing field on the internet. To start a website, you don't need a lot of money or connections. Just host your website and you are good to go. If your service is good, it will find favour with web users. Unlike cable TV, where you have to forge alliances with cable connection providers to make sure that your channel reaches viewers, on the internet you don't have to talk to ISPs to put your website online. This has led to the creation of Google, Facebook, Twitter and countless other services. All of these services had very humble beginnings. They started as basic websites with modest resources, but they succeeded because net neutrality allowed web users to access them in an easy and unhindered way.

What will happen if there is no net neutrality? 

If there is no net neutrality, ISPs will have the power (and inclination) to shape internet traffic so that they can derive extra benefit from it. For example, several ISPs believe that they should be allowed to charge companies for services like YouTube and Netflix because these services consume more bandwidth compared to a normal website. Basically, these ISPs want a share in the money that YouTube or Netflix make. 

Without net neutrality, the internet as we know it will not exist. Instead of free access, there could be "package plans" for consumers. For example, if you pay Rs 500, you will only be able to access websites based in India; to access international websites, you may have to pay more. Or there could be different connection speeds for different types of content, depending on how much you are paying for the service and what "add-on package" you have bought.

Lack of net neutrality will also spell doom for innovation on the web. It is possible that ISPs will charge web companies to enable faster access to their websites; those who don't pay may find that their websites open slowly. This means bigger companies like Google will be able to pay to make access to YouTube or Google+ faster for web users, but a startup that wants to create a different and better video hosting site may not be able to do that.

Will the concept of net neutrality survive?

Net neutrality is a sort of gentlemen's agreement. It has survived so far because few people realized the potential of the internet when it took off around 30 years ago. But now that the internet is an integral part of society and incredibly important, ISPs across the world are trying to gain the power to shape and control traffic. There are, however, ways to keep net neutrality alive.

Consumers should demand that ISPs keep their hands-off approach to internet traffic. If consumers see a violation of net neutrality, they ought to take a proactive approach and register their displeasure with the ISP. They should also reward ISPs that uphold net neutrality.

Monday, 2 December 2013

MOBILE MALWARE

Mobile malware has emerged as a real and significant problem. Addressing it is no longer optional. As with other IT security risks, technology isn’t a silver bullet, but it is a key component of a holistic solution that also incorporates people and process.

A mobile virus is malicious software that targets mobile phones or wireless-enabled PDAs, and can cause system failure and the loss or leakage of confidential information. The insidious objectives of mobile malware range from spying to keylogging, from text messaging to phishing, and from unwanted marketing to outright fraud.

Fifty-nine percent of IT and security professionals recently surveyed by the Ponemon Institute said mobile devices are increasing the prevalence of malware infections within their organizations. This is no shock: the extraordinary growth of mobile platforms has made them an irresistible target. The only surprise would have been if these devices had escaped attack.

Years ago, PC malware exploded when Windows achieved dominance. Something similar is occurring with mobile. As the mobile marketplace has grown and evolved, the Android platform has become dominant. Worldwide, 70% of new smartphones now run Android, with iOS running a distant second. (Microsoft’s Windows Phone 8 platform offers promise, but hasn’t yet achieved significant market penetration.)

The Android platform’s openness has made it attractive to users, device manufacturers, carriers, app developers and malware creators alike. That’s where the malware creators are focused.

In BYOD arrangements, mobile devices are often owned by users, who act as de facto administrators. Users typically decide which apps to run and where to get them. Wider smartphone and tablet usage is often correlated with a loss of organizational control. And that, in turn, can compromise security in multiple ways. This is why some organizations are pursuing choose your own device (CYOD) approaches, where users pick their devices from a list the company is prepared to support, will continue to own, and plans to centrally administer. Of course, CYOD isn't always an option, and many organizations have chosen to accept the tradeoffs associated with full BYOD.

Mobile malware risks
Organizations evaluating mobile malware risks should assess each of the ways it can damage them, including the following.

Productivity losses: Some forms of malware inconvenience users through aggressive advertising, prevent mobile devices from working properly, and increase support costs.

Direct costs: Some forms of malware and potentially unwanted applications (PUAs) have direct costs by utilizing paid mobile services such as SMS, with or without the user’s awareness or understanding.

Security, privacy, and compliance risks: Mobile malware can compromise corporate and customer data, systems, and assets that must be protected—placing the organization at competitive, reputational and legal risk.

Some mobile malware and PUAs merely annoy and frustrate. Yet as a whole, mobile malware and PUAs represent a significant and growing problem.

Sunday, 3 November 2013

GREEN COMPUTING

Driven by rising electricity costs, green legislation and corporate social responsibility, green IT is increasingly on many IT professionals’ minds, particularly for the power-hungry data centre. Whatever the reasons, experts say that in the long run, having an energy-efficient data centre helps the environment and also saves businesses money.

Technologies that can help data centres become green:

Data centre infrastructure management:
Experts rate data centre infrastructure management (DCIM) tools as one of the coolest technologies that can help companies make their infrastructure energy-efficient and green. Until 2009, DCIM had virtually no market penetration, but today it is one of the most significant areas of green computing. DCIM brings together standalone functions such as data centre design, asset discovery, systems management, capacity planning and energy management to provide a holistic view of the data centre, ranging from the rack or cabinet level to the cooling infrastructure and energy utilisation. It helps encourage the efficient use of energy, optimise equipment layouts, support virtualisation and consolidation, and improve data centre availability.

Free air cooling
Data centre power use is high on the agenda for most data centre developers. Energy costs have become the largest single element in the data centre’s total cost of ownership (TCO), ranging from 20% to 60% depending on the facility’s business model, and as energy prices (and/or taxes) rise, this share will only become larger. Free or natural air cooling is the practice of using outside air to cool data centre facilities rather than running power-hungry mechanical refrigeration or air-conditioning units.
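To make the 20% to 60% figure concrete, here is a rough back-of-the-envelope calculation in Python; the facility load, electricity price and TCO figures are invented purely for illustration.

```python
# Invented example figures -- substitute your own facility's numbers.
facility_load_kw = 900    # average total power draw (IT plus cooling and overhead)
price_per_kwh = 0.12      # electricity price
annual_tco = 2_500_000    # total cost of ownership per year, in the same currency

hours_per_year = 24 * 365
annual_energy_kwh = facility_load_kw * hours_per_year
annual_energy_cost = annual_energy_kwh * price_per_kwh
share_of_tco = annual_energy_cost / annual_tco

print(f"{annual_energy_kwh:,.0f} kWh/year costs {annual_energy_cost:,.0f}, "
      f"which is {share_of_tco:.0%} of TCO")
# 7,884,000 kWh/year costs 946,080, which is 38% of TCO
```

Free air cooling attacks the cooling part of that load directly, which is why it has such a visible effect on the energy share of TCO.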

Low-power servers
Data centre operators are looking for more efficient alternatives to the current x86 standard server racks and blades to make their infrastructure sustainable in the long term.

On-site wind generation or use of renewable energy
A number of large businesses, including Apple, Facebook and Google, are taking initiatives to power their data centres using wind energy.

Data centre consolidation and virtualisation
Virtualisation and data centre consolidation strategies help enterprises streamline IT resources and utilise the untapped processing power of high-power server and storage devices. The combination of virtualisation, low-latency and high-bandwidth network connectivity, and specialised servers has the potential to slash data centre capital costs and improve energy efficiency.

Cloud computing
Cloud computing can help enterprises in their green IT efforts, since a computing cloud offers higher CPU utilisation.

Energy-efficient cooling in the data centre
Many data centres are being run against old-style environmental designs, where the approach to cooling is based around ensuring that input cooling air is at such a low temperature that outlet air does not exceed a set temperature. In many cases, the aim has been to keep the average volumetric temperature in the data centre at around 20°C or lower, with some facilities running at between 15°C and 17°C.

Other technologies that can help data centres become green are:

Optimising airflow for maximum cooling
Increasing a data centre’s thermal envelope