Friday, 11 July 2014

ROOTING ANDROID DEVICES

Users root their Android devices because it gives them unlimited access, but system administrators can root devices, too, then analyze everything that's happening on the phone. Rooting Android devices isn't for everyone, however.

Before you even start thinking about rooting Android devices, you should know that no vendor builds devices that are easy to root -- and for good reason. If users have complete access, they may change settings or permissions that cause their devices to stop working. And whether you or your users hack into a device, doing so voids the warranty, forfeits manufacturer support and, in the worst case, can leave the device completely non-functional. Some devices are even programmed to reset themselves to the original operating system upon rebooting, which means that you'll need to root the device over and over again.

To root or not to root?
So should you root Android devices? It depends.
If you need to thoroughly analyze app behavior on devices or build Android apps, it might make sense for you to root one device to test or build the apps before deploying them to all users. In this case, you could root an Android smartphone that you use only for development and testing. If neither of these situations applies to you, however, you're better off letting users' Android devices run their original operating systems.

Rooting requirements

If you do decide to go for it, just know that rooting Android devices requires a ROM that is specifically made for the device you're trying to root. You upload that ROM to the device to replace the default operating system, but there are many different Android devices and OS versions, and even small hardware differences mean that a ROM built for a similar device won't work on the exact model you're rooting. If you're planning to root a device so you can build and test applications, make sure to choose a device that has a ROM available.

You should also use a ROM from a trusted source. It's not a good idea to just download any ROM from the Internet, because the ROM maker could have put anything in it, such as backdoors or other security holes. To avoid this problem, use a ROM from a source that is open enough to be verified, such as CyanogenMod.

In addition to getting the right ROM, you'll also need to prepare the device itself. Typically, that involves accessing the Settings menu on the device and selecting the USB debugging option from the Developer options. This allows you to work on the device when it is connected to the computer.

You also need to prepare a computer so that it can see the device. For a Windows computer, that means installing USB drivers for the specific Android device you're trying to root. If you're using a Linux computer, you'll have to create udev rules, but that is a fairly complicated task.
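To make that more concrete, here is a minimal Python sketch of what setting up such a udev rule might look like. The vendor ID shown (18d1, Google's) and the plugdev group are assumptions; substitute the values for your device's manufacturer (visible via lsusb) and your distribution, and run the script as root.

    #!/usr/bin/env python3
    """Sketch: generate a udev rule so a Linux computer can see an Android device."""
    import subprocess

    VENDOR_ID = "18d1"  # Google's USB vendor ID -- replace with your device maker's
    RULE = (f'SUBSYSTEM=="usb", ATTR{{idVendor}}=="{VENDOR_ID}", '
            'MODE="0666", GROUP="plugdev"\n')

    # udev reads rule files from this directory; 51-android.rules is a
    # conventional (not mandatory) name for Android device rules.
    with open("/etc/udev/rules.d/51-android.rules", "w") as f:
        f.write(RULE)

    # Ask udev to pick up the new rule without rebooting.
    subprocess.run(["udevadm", "control", "--reload-rules"], check=True)

After reloading the rules, unplugging and reconnecting the device should make it visible to the adb tool.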

Once the computer is ready, you need to connect the device to the computer via USB, then load the ROM image onto the device. This is the all-or-nothing moment: If the root attempt fails, the phone will be "bricked" and you won't be able to do anything with it anymore. From a corporate perspective, the main problem with this procedure is not the risk of bricking a phone, but the absence of a good source for the ROMs that can be used for rooting.
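For concreteness, the connect-and-load step usually runs through the adb and fastboot command-line tools. The Python sketch below outlines one typical flow; it assumes both tools are installed and on the PATH, that the bootloader is already unlocked and that recovery.img is a hypothetical image built for this exact device model. The precise steps vary per device, so treat this as an outline, not a recipe.

    #!/usr/bin/env python3
    """Sketch: a typical adb/fastboot flow for loading a custom image."""
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Confirm the computer can see the device before doing anything risky.
    run("adb", "devices")

    # Reboot into the bootloader, where fastboot can talk to the device.
    run("adb", "reboot", "bootloader")
    run("fastboot", "devices")

    # Flash the custom recovery; the ROM itself is then typically installed
    # from within that recovery. A failure at this point is where a device
    # can end up "bricked".
    run("fastboot", "flash", "recovery", "recovery.img")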


If the process works, the phone will be rooted and you'll get complete access to it. To work in the root environment, you'll need a remote control tool such as the Android Debug Bridge (ADB) developer tool. Alternatively, you could install an app on the rooted phone -- a shell environment or a full-featured app such as TWRP Manager -- that allows you to work on the device with root access.
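As a quick illustration, the Python sketch below uses ADB to check whether a connected device actually grants root access. It assumes adb is on the PATH; the exact behaviour of the su binary differs between ROMs.

    import subprocess

    # On a rooted device, running `id` through su should report uid 0 (root).
    out = subprocess.run(
        ["adb", "shell", "su", "-c", "id"],
        capture_output=True, text=True
    ).stdout

    if "uid=0(root)" in out:
        print("Device is rooted: adb can open a root shell.")
    else:
        print("No root access:", out.strip())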

Wednesday, 11 June 2014

MOBILE INSTANT MESSAGING

Is mobile instant messaging the next big thing in the enterprise? Along with the proliferation of enterprise mobile apps, the answer could be yes.

By 2016, 4.3 billion people will have an email account -- and the same number of people should have an instant messaging (IM) account. Still, it is hard to say that mobile IM will completely displace mobile email.

Why is enterprise IT concerned about the rise of mobile instant messaging? Any written communication can expose the company to the risk of data loss. Judging by the sheer number of free options for instant messaging, you can see why IT worries that it cannot control company-related conversations happening on mobile devices.

Who are the mobile IM players?
You can put mobile instant messaging into four categories: third-party apps (free or mostly free), native messaging, enterprise mobility management and company infrastructure tools.

Third-party or free apps include Facebook, Snapchat, Viber and WhatsApp. This category also includes newer players that are focused on the enterprise, such as Tigertext and Cotap.

Native apps consist of the instant messaging clients that come with your mobile device, such as BBM for BlackBerry, Messages for Apple and Google Talk for Android.

Enterprise mobility management (EMM) tools have some of the same characteristics as Tigertext and Cotap in that they encrypt IM traffic and keep company instant messaging sessions separate from the device's native instant messaging sessions.

Company infrastructure tools now have mobile apps to accompany them, too. Tools such as Lync, Yammer and IBM Sametime have apps that support their instant messaging infrastructure. These apps have traditionally offered a far less usable experience than their desktop counterparts, but they are getting better with each upgrade.

Do users want mobile instant messaging?
Just as tablets, smartphones and laptops each have their place, different communication methods have different purposes. If you want a more formal communication that reaches many people, with attachments and a way to let recipients read the message on their own terms, then email is your method. If you want a quick conversation that warrants a quick response, then IM is your method. Add the fact that Millennials may make up 30-50% of your workforce in a few years, and mobile IM may be the standard form of communication in the future.

So would employees of your company even use the enterprise-provided instant messaging option, or would they just use their native messaging platform or an app like Snapchat or Viber?

This is the same argument as the one regarding company-offered productivity apps vs. someone’s personal productivity apps. As an enterprise, if you offer a tool that is easy to use and you provide the proper training, your employees will be more likely to use it. Often, though, instant messaging does not fall under any guideline or policy, and users may not even know whether they are using the company-provided tool.

Several of my co-workers have contacts set up in either their company EMM contact list or their native contact list. When they select a name to send an IM, they gravitate toward whatever is easier for them. Usually, it ends up being the native client because they find that more natural.

Do users need an enterprise mobile instant messaging tool? Yes. Will enterprises adopt a tool outside their EMM or infrastructure options? No. The consumerization of IT is still very new for most organizations, and they are focused on managing devices, email and apps. Most organizations will wait for Microsoft and the EMM vendors to keep improving their apps to provide an IM tool for their employees. Otherwise, it will take a large breach resulting in data loss to spur adoption of proper enterprise mobile instant messaging tools.

Thursday, 8 May 2014

MOBILE DEVICE MANAGEMENT

What is the market definition of mobile device management?

Enterprise mobile device management (MDM) software is:

(1)  A policy and configuration management tool for mobile handheld devices (smartphones and tablets based on smartphone OSs), and

(2)  An enterprise mobile solution for securing and enabling enterprise users and content. It helps enterprises manage the transition to a more complex mobile computing and communications environment by supporting security, network services, and software and hardware management across multiple OS platforms, and now sometimes laptops and ultrabooks as well. This is especially important as bring your own device (BYOD) initiatives and advanced wireless computing become the focus of many enterprises. MDM can support corporate-owned as well as personal devices, and helps support a more complex and heterogeneous environment.

Criteria to consider when choosing an MDM solution:

Internal resources for management — Most MDM purchases are for 500 devices or fewer. The size of the company matters less here than the internal resources available to manage devices.

Complexity of data — Gartner's position is that any enterprise data needs to be protected and managed. MDM is a start, enforcing enterprise policy around encryption and authentication. Containers should be used to manage email and other mobile content, like file sharing, or enterprise apps, like sales force automation (SFA). These are also delivered by MDM vendors.

Cross-platform needs — More than ever, companies will begin to support multiple OSs. Although Apple dominates smartphone sales in the enterprise today, users will want to bring a variety of other devices to work, and MDM providers can manage them in an integrated fashion. Once your company has such a diverse environment, MDM becomes a necessity.

Delivery — Companies need to decide whether they want MDM on-premises or in a SaaS/cloud model. SMBs prefer the SaaS model because it reduces cost and total cost of ownership; there is no need to buy and run hardware for a relatively small number of users. Large companies that are comfortable with the cloud model, usually in non-regulated markets, are also moving toward SaaS. In a global, highly distributed environment, they also like the reduction in hardware and server management that cloud brings versus on-premises servers. MDM managed services are also emerging, but are currently limited in scope and adoption.

Cautions and factors to weigh before moving to MDM:

Most companies started out using Exchange ActiveSync (EAS) to manage their devices, but found it lacking in the following areas, which pushed them to purchase a more complete MDM suite:

Volume of devices: It is difficult to manage a large volume of devices on EAS. Once companies got to more than 500 devices, they typically looked for a more complete MDM suite.

Mix of platforms: Companies that had two or more mobile OS platforms to manage found it difficult to do so on EAS.

Granular support/policy: More complete MDM systems offer a deeper management capability, with more-detailed policies. For example, EAS allows passwords to be enforced (depending on the mobile OS), but more-comprehensive MDM systems allow more flexibility in the password type, length and complexity.
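To illustrate that difference, here is a small Python sketch modelling the kind of granular password policy a fuller MDM suite can push, versus EAS's comparatively blunt enforcement. The field names are hypothetical, not any real product's API.

    import re
    from dataclasses import dataclass

    @dataclass
    class PasswordPolicy:
        """Illustrative MDM-style policy: type, length and complexity
        are all tunable, rather than a single on/off password switch."""
        min_length: int = 8
        require_alphanumeric: bool = True
        require_special: bool = True

        def complies(self, password: str) -> bool:
            if len(password) < self.min_length:
                return False
            if self.require_alphanumeric and not (
                    re.search(r"[A-Za-z]", password) and re.search(r"\d", password)):
                return False
            if self.require_special and not re.search(r"[^A-Za-z0-9]", password):
                return False
            return True

    policy = PasswordPolicy(min_length=10)
    print(policy.complies("hunter2"))         # False: too short
    print(policy.complies("correct-h0rse!"))  # True: length, letters, digit, symbol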

Reporting: EAS is very weak on device reporting. Companies that wanted better reporting moved to more complete MDM systems.

Ability to block certain device platforms: Companies may want to restrict the types of mobile OSs they will support.

Need to identify rooted/jailbroken devices: There is concern over rooted or jailbroken devices because companies cannot control their data if devices are compromised.
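MDM agents normally run such checks on the device itself, but as a rough illustration, this Python sketch probes a connected device over adb for common locations of the su binary. The path list is a heuristic and far from exhaustive; determined users can hide root from checks like this.

    import subprocess

    # Common locations of the su binary on rooted Android devices.
    SU_PATHS = ["/system/bin/su", "/system/xbin/su", "/sbin/su"]

    def looks_rooted() -> bool:
        for path in SU_PATHS:
            result = subprocess.run(
                ["adb", "shell", f"ls {path}"],
                capture_output=True, text=True)
            # `ls` echoes the path if the file exists, or prints an error.
            if path in result.stdout and "No such file" not in result.stdout:
                return True
        return False

    print("Root indicators found!" if looks_rooted() else "No root indicators found.")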


Advanced capabilities to manage mobile apps: Application provisioning and updating are important to companies today.

Wednesday, 9 April 2014

7Vs OF BIG DATA

Big Data is a big thing. It will change our world completely and is not a passing fad that will go away. To understand the phenomenon that is big data, it is often described using seven Vs: Volume, Velocity, Variety, Veracity, Value, Versatility and Validity.

Volume refers to the vast amounts of data generated every second. Just think of all the emails, Twitter messages, photos, video clips, sensor data etc. we produce and share every second. We are not talking terabytes but zettabytes or brontobytes. On Facebook alone we send 10 billion messages per day, click the "like" button 4.5 billion times and upload 350 million new pictures each and every day. If we take all the data generated in the world between the beginning of time and 2008, the same amount of data will soon be generated every minute! This increasingly makes data sets too large to store and analyse using traditional database technology. With big data technology we can now store and use these data sets with the help of distributed systems, where parts of the data are stored in different locations and brought together by software.
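As a toy illustration of that distributed idea, the Python sketch below counts words in separate chunks of data in parallel and then merges the partial results -- the same map-and-reduce pattern that systems such as Hadoop apply across many machines rather than local processes.

    from multiprocessing import Pool
    from collections import Counter
    from functools import reduce

    # Each "location" holds its own chunk of the data.
    chunks = [
        "big data is a big thing",
        "big data will change our world",
        "data is the new oil",
    ]

    def count_words(chunk: str) -> Counter:
        # The "map" step: each worker summarises its own chunk.
        return Counter(chunk.split())

    if __name__ == "__main__":
        with Pool() as pool:
            partials = pool.map(count_words, chunks)
        # The "reduce" step: bring the partial counts together.
        total = reduce(lambda a, b: a + b, partials, Counter())
        print(total.most_common(3))  # e.g. [('big', 3), ('data', 3), ('is', 2)]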

Velocity refers to the speed at which new data is generated and the speed at which data moves around. Just think of social media messages going viral in seconds, the speed at which credit card transactions are checked for fraudulent activities, or the milliseconds it takes trading systems to analyse social media networks to pick up signals that trigger decisions to buy or sell shares. Big data technology now allows us to analyse data while it is being generated, without ever putting it into databases.
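A minimal Python sketch of that idea: analyse events as they arrive, keeping only a short sliding window in memory instead of writing everything to a database first. The one-second window is an arbitrary choice for illustration.

    import time
    from collections import deque

    WINDOW_SECONDS = 1.0
    timestamps = deque()

    def record_event(now: float) -> int:
        """Register one event and return the event rate over the window."""
        timestamps.append(now)
        # Drop events that have fallen out of the sliding window.
        while timestamps and timestamps[0] < now - WINDOW_SECONDS:
            timestamps.popleft()
        return len(timestamps)

    for _ in range(5):
        print("events in the last second:", record_event(time.time()))
        time.sleep(0.1)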

Variety refers to the different types of data we can now use. In the past we focused on structured data that neatly fits into tables or relational databases, such as financial data (e.g. sales by product or region). Today, however, 80% of the world’s data is unstructured and therefore can’t easily be put into tables (think of photos, video sequences or social media updates). With big data technology we can now harness different types of data (structured and unstructured), including messages, social media conversations, photos, sensor data, and video or voice recordings, and bring them together with more traditional, structured data.

Veracity refers to the messiness or trustworthiness of the data. With many forms of big data, quality and accuracy are less controllable (just think of Twitter posts with hashtags, abbreviations, typos and colloquial speech, as well as the reliability and accuracy of the content), but big data and analytics technology now allows us to work with these types of data. The volumes often make up for the lack of quality or accuracy.

Value: Then there is another V to take into account when looking at Big Data: value! It is all well and good having access to big data, but unless we can turn it into value, it is useless. So you can safely argue that value is the most important V of Big Data. It is important that businesses make a business case for any attempt to collect and leverage big data. It is easy to fall into the buzz trap and embark on big data initiatives without a clear understanding of costs and benefits.

Versatility refers to the different ways in which the same data set can be used and the range of capabilities it supports.

Validity refers to whether the data falls within an expected range and can be verified as correct for its intended use.

Wednesday, 5 March 2014

ZERO CLIENT Vs THIN CLIENT

While the term zero client is something of a marketing buzzword, it is a useful way of differentiating options for the devices that are used to access desktops. A zero client is similar to a thin client in its purpose—accessing a desktop in a data center—but requires a lot less configuration.

Zero clients tend to be small and simple devices with a standard set of features that support the majority of users. They also tend to be dedicated to one data center desktop product and remote display protocol. Typically, configuration is simple—a couple of dozen settings at the most, compared to the thousands of settings you see in a desktop operating system. Zero clients load their simple configuration from the network every time they are powered on; the zero clients at a site will all be the same. Zero clients support access to a variety of desktop types: terminal services, virtual desktop infrastructure (VDI) or dedicated rack-mount or blade workstations.

The basic premise of zero clients is that the device on the user’s desk doesn’t have any persistent configuration. Instead, it learns how to provide access to the desktop from the network every time it starts up. This gives a lot of operational benefits, since the zero client devices are never unique. This contrasts with a thin client, which may have local applications installed and will hold its configuration on persistent storage in the device.

Thin clients became a mainstream product class shortly after Microsoft introduced Windows Terminal Server and Citrix launched MetaFrame, both in 1998. To enter this market, PC manufacturers cut down their desktop hardware platforms. They repurposed their PC management tools, reusing as much technology as possible from their existing PC business. The result was a fairly customized Windows or Linux setup oriented toward being a thin client.

Over time, optional features for USB redirection, a local Web browser, VoIP integration agents and multi-monitor display support were added. Each additional feature added configuration and complexity to the thin client. After a few years, thin clients were really just small PCs; some even had PCI or PC Card slots added. These thicker thin clients got quite close to a full PC in terms of capabilities and complexity. Instead of simplifying management, IT administrators now needed to manage the device on the user’s desk as well as in the data center. Zero clients, then, are a return to simpler devices on users’ desks—with simpler management.

Zero clients are much simpler to manage, configure and update. Zero client firmware images are a few megabytes, compared with the multiple gigabytes that thin client operating systems take up. The update process itself is much quicker and less intrusive on a zero client, possibly occurring every day when the client boots.

Thin clients need to be patched and updated as often as the desktop operating system they carry; since zero clients have no operating system, they need far less frequent updates. Zero clients have few knobs and switches to turn—probably fewer than 100 configuration items in total—so they are simple to manage. Often, their bulk management is a couple of text files on a network share. Thin clients have a whole operating system to manage, with tens of thousands of settings necessitating complex management applications, usually on dedicated servers at multiple sites.

A zero client is like a toaster: a consumer can take it out of its packaging and make it work. If the consumer is an employee at a remote branch, there are benefits to having that worker be able to deploy a new terminal. Thin clients, by contrast, sometimes need special builds or customized settings applied to them before they are deployed, which is not ideal for rapid deployment. The ability to scale rapidly can be important when it comes to something like opening a call center to accommodate an advertising campaign or a natural disaster response.

Zero clients also have lower power consumption. Thin clients have mainstream CPUs and often graphics processing units, but a zero client usually has a low-power CPU (or none at all), which cuts down on power consumption and heat generation.


The simplicity of zero clients also makes for a much smaller attack surface, so placing them in less trusted networks is not so worrying. Putting them in physically hostile locations is also safe: lower power draw and usually passive cooling mean that heat, dust and vibration are less likely to cause maintenance problems. Zero clients are all the same, and models are released every couple of years rather than every few months, so your fleet will contain fewer models. That means there’s no need for help desk calls when a device moves from one desk to another, and the user experience is consistent. Your supplier’s inventory of zero clients will also have fewer models, which should lead to better availability when you need new ones.

Monday, 17 February 2014

DATA PRIVACY/PROTECTION

High-profile security failures have made privacy protection a top-of-mind issue for many organisations. In several cases, hackers have gained access to online networks and systems, stealing personal customer data such as names, addresses and passwords. The direct financial costs of these breaches are often significant, ranging from the tens of thousands into the millions, and the damage to a company’s brand and reputation often costs far more. When we think of cyber risk we tend to think of security breaches, but when we look at it through a privacy lens, the range of risks broadens significantly.

As IT organizations move toward virtualization, cloud computing and IT-as-a-service, data protection will undergo a fundamental shift. The underpinnings of this transformation include a change from one-size-fits-all backup to a data protection offering that matches service levels with application requirements. IT organizations would be wise to bring in outside help to navigate through this transition.

There are several issues that an outside consultant can help manage, including:

ROI: The business justification of data protection as a service. Data protection is still viewed as insurance, and a quality risk assessment and business impact analysis from an outsider can have a meaningful impact with upper management.

Training and Education: Organizations have an opportunity to re-skill staff and gain increased leverage by developing data protection approaches that free up existing personnel. As discussed, however, new approaches will require new mindsets and existing staff will have to be educated and in some cases re-deployed on other tasks.

Architecture: Data protection is not trivial. Virtualization complicates the process and creates I/O storms. Architecting an efficient, streamlined data protection solution and services-oriented approach can be accomplished more effectively with outside help. Don’t be afraid to ask.

Customers want choices and ease of access, which requires them to provide personal information and preferences; businesses, in turn, want to be able to gather, mine and share this information efficiently. Certain industries, such as financial services and health care, often draw the most attention in the privacy discussion because of the personal information they possess. However, all industries are affected by privacy and data protection requirements.

Organisations should confirm that they do not have misplaced or invented reliance on third-party providers that have access to their own information or that of their customers, and should design and implement robust monitoring and testing of privacy and data protection risks and related controls. Most companies have developed and implemented privacy and data protection programs, yet many of these programs fall short for a variety of reasons, including a lack of understanding of the risk landscape around information collection and transmittal, inadequate organisational policies, insufficient training and unverified third-party providers, among many others.


The bottom line is that data protection is changing from a one-size-fits-all exercise viewed as expensive insurance to more of a service-oriented solution that can deliver tangible value to the business by clearly reducing risk at a price aligned with business objectives. Understanding data protection holistically, spanning backup, recovery, disaster recovery, archiving and security, and as part of IT-as-a-service, is not only good practice; it can be good for your bottom line.

Monday, 20 January 2014

NET NEUTRALITY

What is net neutrality?
Net neutrality is an idea derived from how telephone lines have worked since the beginning of the 20th century. In the case of a telephone line, you can dial any number and connect to it. It does not matter if you are calling from operator A to operator B. It doesn't matter if you are calling a restaurant or a drug dealer. The operators neither block access to a number nor deliberately delay connection to a particular number, unless forced to by the law. Most countries have rules that require telecom operators to provide an unfiltered and unrestricted phone service.

When the internet started to take off in the 1980s and 1990s, there were no specific rules requiring internet service providers (ISPs) to follow the same principle. But, mostly because telecom operators were also ISPs, they adhered to the same principle anyway. This principle is known as net neutrality. An ISP does not control the traffic that passes through its servers. When a web user connects to a website or web service, he or she gets the same speed; the data rate for YouTube videos and Facebook photos is theoretically the same. Users can access any legal website or web service without any interference from the ISP.

How did net neutrality shape the internet?

Net neutrality has shaped the internet in two fundamental ways.

One, web users are free to connect to whatever website or service they want. ISPs do not bother with what kind of content flows through their servers. This has allowed the internet to grow into a truly global network and has allowed people to express themselves freely. For example, you can criticize your ISP in a blog post, and the ISP will not restrict access to that post for its other subscribers even though the post may harm its business.

But more importantly, net neutrality has enabled a level playing field on the internet. To start a website, you don't need a lot of money or connections. Just host your website and you are good to go. If your service is good, it will find favour with web users. Unlike cable TV, where you have to forge alliances with cable connection providers to make sure that your channel reaches viewers, on the internet you don't have to talk to ISPs to put your website online. This has led to the creation of Google, Facebook, Twitter and countless other services. All of these services had very humble beginnings: they started as basic websites with modest resources. But they succeeded because net neutrality allowed web users to access them in an easy and unhindered way.

What will happen if there is no net neutrality? 

If there is no net neutrality, ISPs will have the power (and inclination) to shape internet traffic so that they can derive extra benefit from it. For example, several ISPs believe that they should be allowed to charge companies for services like YouTube and Netflix because these services consume more bandwidth compared to a normal website. Basically, these ISPs want a share in the money that YouTube or Netflix make. 

Without net neutrality, the internet as we know it will not exist. Instead of free access, there could be "package plans" for consumers. For example, if you pay Rs 500, you will only be able to access websites based in India; to access international websites, you may have to pay more. Or there could be different connection speeds for different types of content, depending on how much you are paying for the service and what "add-on package" you have bought.

A lack of net neutrality will also spell doom for innovation on the web. It is possible that ISPs will charge web companies to enable faster access to their websites, and those who don't pay may find that their websites open slowly. This means bigger companies like Google will be able to pay to make access to YouTube or Google+ faster for web users, while a startup that wants to create a different and better video hosting site may not be able to afford it.

Will the concept of net neutrality survive?

Net neutrality is a sort of gentlemen's agreement. It has survived so far because few people realized the potential of the internet when it took off around 30 years ago. But now that the internet is an integral part of society and incredibly important, ISPs across the world are trying to get the power to shape and control the traffic. There are still ways to keep net neutrality alive, however.

Consumers should demand that ISPs keep their hands off internet traffic. If consumers see a violation of net neutrality, they ought to take a proactive approach and register their displeasure with the ISP. They should also reward ISPs that uphold net neutrality.