Cliff Saran’s Enterprise blog


August 11, 2010  8:50 AM

Microsoft Patch Tuesday – 10th August 2010

Profile: Cliff Saran
Application Compatibility, Applications, ChangeBase, Compatibility, Microsoft, patch, Windows

With this Microsoft Patch Tuesday update we have the largest release of security and application updates the ChangeBASE team has dealt with. Nine of the updates are rated 'Critical' and the remaining six are rated 'Important' – a very significant release by Microsoft standards.


As we have seen in many other Microsoft Patch Tuesday releases, all of these patches will require a system restart for both workstation and server environments.

We have also included a snapshot of sample results from the AOK Workbench, showing a single application and the Patch Impact Assessment result for MS10-053, the IE browser security update:

MS10-053: Cumulative Security Update for Internet Explorer:

August Patch Tuesday Update - image1.JPG

Testing Summary

MS10-046 – Vulnerability in Windows Shell Could Allow Remote Code Execution (2286198)

MS10-049 – Vulnerabilities in SChannel Could Allow Remote Code Execution (980436)

MS10-051 – Vulnerability in Microsoft XML Core Services Could Allow Remote Code Execution (2079403)

MS10-052 – Vulnerability in Microsoft MPEG Layer-3 Codecs Could Allow Remote Code Execution (2115168)

MS10-053 – Cumulative Security Update for Internet Explorer (2183461)

MS10-054 – Vulnerabilities in SMB Server Could Allow Remote Code Execution (982214)

MS10-055 – Vulnerability in Cinepak Codec Could Allow Remote Code Execution (982665)

MS10-056 – Vulnerabilities in Microsoft Office Word Could Allow Remote Code Execution (2269638)

MS10-060 – Vulnerabilities in the Microsoft .NET Common Language Runtime and in Microsoft Silverlight Could Allow Remote Code Execution (2265906)

MS10-047 – Vulnerabilities in Windows Kernel Could Allow Elevation of Privilege (981852)

MS10-048 – Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege (2160329)

MS10-050 – Vulnerability in Windows Movie Maker Could Allow Remote Code Execution (981997)

MS10-057 – Vulnerability in Microsoft Office Excel Could Allow Remote Code Execution (2269707)

MS10-058 – Vulnerabilities in TCP/IP Could Allow Elevation of Privilege (978886)

MS10-059 – Vulnerabilities in the Tracing Feature for Services Could Allow an Elevation of Privilege (982799)

August Patch Tuesday Update - image2.JPG
August Patch Tuesday Update - image3.JPG

Security Update: Detailed Summary

August Patch Tuesday Update - image4.JPG

August Patch Tuesday Update - image5.JPG

August Patch Tuesday Update - image6.JPG

August Patch Tuesday Update - image8.JPG

August Patch Tuesday Update - image9.JPG

August Patch Tuesday Update - image10.JPG

August Patch Tuesday Update - image11.JPG

August Patch Tuesday Update - image12.JPG
August Patch Tuesday Update - image13.JPG
*All results are based on the AOK Application Compatibility Lab's test portfolio of over 1,000 applications.

August 9, 2010  1:53 PM

August Patch Tuesday – a busy time for the deployment team

Profile: Cliff Saran
Application Compatibility, Applications, ChangeBase, Compatibility, Deployment, Microsoft, patch, Security, Windows

Within the next 24 hours we'll see the latest security patch update from Microsoft. Those who find these notifications useful will already know that this month, August, is going to be a big one. Looking at the list of 14 security patches due to be released tomorrow, a lot of companies will have a lot of work to do if they are to maintain their stringent levels of security and avoid opening the gates to vulnerability.

 

What's interesting to me is that eight of the 14 patches are ranked as critical. This further highlights the need for companies to be extra vigilant during the holiday season. Once again, this is an area organisations can't avoid or be complacent about. It will be all hands to the pump on Wednesday, and we'll be issuing the ChangeBASE AOK Patch report as usual to highlight any application compatibility issues that the updates might cause.


August 4, 2010  12:16 PM

EMC picks Greenplum

Profile: imurphy
Business Intelligence, cloud, DB2, EMC, Greenplum, IBM, Microsoft, Oracle, SAP, Sybase, Teradata, VMware

At the beginning of July, EMC announced its intention to acquire data warehouse vendor Greenplum. On 29 July the deal was completed, and EMC announced that Greenplum is the foundation of the new EMC Data Computing Product Division.

This is another step down EMC's path to becoming an all-round player capable of competing with IBM, HP and Oracle. We now have four major players with large hardware and software divisions that are capable of providing most, if not all, of the components for a corporate datacentre.

At the same time EMC has jumped from being a storage company to a large-scale data warehousing vendor. This means it is going to be competing with the likes of Teradata, Oracle, HP, Microsoft, IBM and the newly formed SAP/Sybase.

All of this fits into a much wider strategy that encompasses VMware, the EMC Backup and Recovery Solution division and its ever closer partnership with Cisco.

With VMware, EMC all but owns the virtualisation market, which in turn gives it the keys to build Cloud infrastructure. The acquisition of Data Domain and a small number of other companies to create its BRS division meant it could now back up data into the Cloud as well as locally. The two together solve one major problem for companies looking to use the Cloud as a private/hybrid environment where they want to take advantage of Disaster Recovery and on-demand resources – how to synchronise data and switch control from local to Cloud.

Now, EMC sees an opportunity to take the largest consumer of storage and processing resources inside most large organisations into that same Cloud market. Business Intelligence (BI), data warehouses and data analytics are resource hungry processes. They need a lot of storage both for the raw data and for the refined sets of data that are used by people mining that data. On top of this, they consume a lot of processor power, memory and bandwidth.

EMC knows all of this; after all, it is the biggest supplier of storage systems. In the past it has been content to be the underlying hardware for Oracle, IBM, Microsoft, Teradata or any other database. But as databases become larger there comes a point where they need to be tightly integrated with the hardware to get the maximum performance.

IBM has spent a lot of time with its very large customers integrating DB2 into its hardware platforms in order to extract maximum performance. One of the stated reasons for Oracle buying Sun was to get a hardware platform that it could tightly integrate with the database. The same is true of HP. As a result, it should come as no surprise then that EMC, as a hardware vendor, wants to do the same.

What makes this especially interesting is that the Greenplum database is a massively parallel processing (MPP) product. This means it can take advantage of all the processor cores you can throw at it. This is where VMware comes into play: it has all the required management tools to allocate resources on demand, including processor cores. Ally an MPP database with a resource management system and you can ramp capacity up and down on demand – and this is where the competitors will need to respond quickly to counter the move.

We have already seen VMware tightly integrate hardware and software stacks with vSphere 4. They have a highly successful software solutions division where companies have ported their applications into VMs that can be used for evaluation and made live by the purchase of a key. On top of this they have used the EMC engineering division to build a hardware certification programme.

It is not a huge jump, therefore, to see the Greenplum database being distributed as a virtual machine ready for deployment. A step beyond that comes the EMC Database Appliance to compete with IBM and Oracle. Move sideways and the whole of the VMware management suite gets its own underlying database platform.

Don't forget that a lot of the current VMware team are ex-Microsoft, and when we first saw the abstraction layers for vSphere 4 it looked an awful lot like Windows Server, but with a proper certification programme. So will we see a version of Greenplum acting as the repository for directory services, authentication and management?

Don’t bet against it, because it is a natural step to take. In fact, given the need for service management and integrated help desk solutions, it would not be unreasonable to think of people inside EMC and VMware already planning their own CMDB solutions and other applications on top of the Greenplum platform.

Think SAP + Sybase + NetApp. SAP and NetApp are already working closely together along with a number of Cloud and hosting providers to provision hosted SAP. The acquisition of Sybase will allow SAP to have its own tightly bound database and given its NetApp relationship, it cannot have escaped the R&D team that tightly binding Sybase to the NetApp platform would make this a very effective solution.

For ISVs and anyone who wants to be a Cloud platform supplier, EMC has a solution here as well. Once EMC integrates Greenplum effectively onto its hardware and adds in the multi-tenant management elements from VMware, you have a Cloud BI platform that can be delivered as part of a public or even a private Cloud product.

How about EMC taking the Software as a Service approach and creating Database as a Service and BI as a Service based on the Greenplum platform? We’ve already seen software development and software testing go down this route so taking the database and BI into the “as a Service” world is quite feasible.

These are just a few of the possible ways that we could see EMC go. The most important thing, however, is that this presages a whole new world for databases where they are no longer just an application but instead return to the core of the business as part of the hardware platform.


August 2, 2010  2:26 PM

Microsoft Releases Emergency Update Today

Profile: Cliff Saran
Application Compatibility, Applications, ChangeBase, Microsoft, patch, Security, Windows

Later today, Microsoft will issue an emergency patch to fix a critical flaw in Windows that enables hackers to run code and take over PCs. The flaw is outlined on the Trusted Reviews site, and several things spring to mind.

This type of response from Microsoft is known as an OOB (out-of-band) release and, as such, is an emergency release. Normal, non-high-risk patches are incorporated into the monthly Patch Tuesday report, which ChangeBASE analyses each month on this highly esteemed blog. But this one is gaining specific and immediate attention from Microsoft, requiring a rapid response and hence a quick testing turnaround.

In light of the speed at which Microsoft is addressing the issue, our advice would be to test and deploy as fast as possible. With this one, organisations can't afford to sit back and see what happens – they need to act fast. Waiting for next week's Patch Tuesday updates is not an option for this security issue.

Here is the link to the original article:
http://www.trustedreviews.com/software/news/2010/08/02/Microsoft-to-Release-Emergency-Security-Patch/p1?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TRVNews+%28TrustedReviews+News+only+Feed%29

To receive updates from Microsoft like these in the future, feel free to sign-up for the Security Update Advance Notification service found here:
http://www.microsoft.com/technet/security/bulletin/notify.mspx.


July 27, 2010  11:08 AM

IBM responds to EU mainframe investigation

Profile: Cliff Saran
anti-trust, EU, IBM, Mainframe, zEnterprise

IBM has issued a statement following the extension of the EU competition authority's investigation into its mainframe business.

 

Here is IBM’s response to the investigation:

 

Not long ago, numerous high tech pundits and IBM competitors declared the mainframe server dead — a dinosaur, extinct. At that time, many companies were abandoning mainframe servers, and others were developing distributed alternatives to centralized computing. In the face of these predictions and fundamental marketplace changes, IBM made a critical decision: to invest billions of dollars in its mainframe technology to bring unprecedented levels of speed, reliability and security to the enterprise market. These investments reinvigorated the mainframe server as a vital competitor in a highly dynamic marketplace.

Today’s computer server market is clearly dominated by Intel-based servers from HP, Dell, Oracle and many others, as well as Unix servers. The numbers speak for themselves: mainframe server sales today are a tiny fraction of worldwide servers — representing just 0.02% of servers shipped and less than 10% of total server revenues in 2009, according to IT industry analyst firm IDC — and shrinking from 2008. That is in sharp contrast to Intel-based servers, which represented more than 96% of all server shipments and nearly 55% of total server revenues in 2009, according to IDC. The first quarter 2010 server share reports from IDC and Gartner show this trend continuing, with mainframe server revenue declining and Intel-based servers growing. Today, the mainframe server is a small niche in the overall, highly-competitive server landscape, but it remains a source of great value for those IBM clients who value its high levels of security and reliability. Yet even with all of its substantial innovations, the migration of certain customers and workloads away from mainframe servers to other systems remains common.


Certain IBM competitors which have been unable to win in the marketplace through investments in fundamental innovations now want regulators to create for them a market position that they have not earned. The accusations made against IBM by TurboHercules and T3 are being driven by some of IBM’s largest competitors — led by Microsoft — who want to further cement the dominance of Wintel servers by attempting to mimic aspects of IBM mainframes without making the substantial investments IBM has made and continues to make. In doing so, they are violating IBM’s intellectual property rights.


IBM intends to cooperate fully with any inquiries from the European Union. But let there be no confusion whatsoever: there is no merit to the claims being made by Microsoft and its satellite proxies. IBM is fully entitled to enforce its intellectual property rights and protect the investments we have made in our technologies. Competition and intellectual property laws are complementary and designed to promote competition and innovation, and IBM fully supports these policies. But IBM will not allow the fruits of its innovation and investment to be pirated by its competition through baseless allegations.

Are these really baseless allegations? The new zEnterprise hardware certainly looks impressive. But the value of mainframe computing is not just in the hardware – it is about the whole ecosystem.


July 22, 2010  8:11 AM

Show me the proof

Profile: imurphy
BI, OpenOffice, Spreadsheets

In an age of increasing computerisation there are times when we need to actually question what the computer is telling us. Take spreadsheets for example. It is not uncommon to discover that the built-in functions can behave differently to the underlying macro and programming languages.

An example of this is the simple rounding problem. The built-in function may round up on a 5 while the equivalent function in code may round down. Use a mix and the errors may balance out – or they may not. You also have to be aware of the number of decimal places you are working with in the underlying data, and then how many places you display on screen.
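To make the divergence concrete, here is a minimal Python sketch comparing two common rounding conventions: "round half to even" and "round half up". Which convention a given spreadsheet engine or macro language actually uses varies by product, so the pairing below is purely illustrative.

```python
from decimal import Decimal, ROUND_HALF_UP

# Python's built-in round() uses "round half to even" (banker's rounding);
# a macro-style round_half_up() is sketched below for comparison. Which
# convention a real spreadsheet or its macro language uses varies by product.

def round_half_up(value, places=2):
    """Round half away from zero on the decimal representation of value."""
    quantum = Decimal(10) ** -places
    return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP))

for x in (0.125, 2.675, 1.005):
    built_in = round(x, 2)           # half-to-even, applied to the binary float
    macro    = round_half_up(x, 2)   # half-up, applied to the printed decimal
    print(f"{x}: built-in={built_in}, macro-style={macro}, differ={built_in != macro}")
```

Mix the two in the same workbook – say a built-in ROUND in a cell feeding a macro that rounds again – and you get exactly the "may balance out, may not" behaviour described above.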

In a complex spreadsheet you can find many examples of rounding, but in a BI environment, where you are simply importing the data into a spreadsheet, you may have literally hundreds of small errors. The rounding error is something you can fix through settings but, strangely enough, few people do. Much more serious are the complex functions that few users want to understand beyond the simple entry of numbers and the delivery of a result.

So why isn't this constantly raised as an issue? Actually, it used to be. VisiCalc, SuperCalc, Lotus, Smart, Excel, PlanPerfect, Quattro Pro, OpenOffice – all of these products have had, and still have, this issue. But over time, as vendors have refused to acknowledge or fix it, people have given up trying to get a solution.

That solution would require any spreadsheet vendor to send their function library to an independent source to have it properly validated. This has been suggested but every vendor has resisted it because, ironically, the last thing that they want is to be told they are doing things in the wrong way.

The impact on a database world might seem negligible but it isn’t. User Defined Fields in some classes of databases use functions to calculate values based on other fields. The more complex the underlying data, the less chance of users actually examining that data to check for errors.

In financial and scientific fields these function errors can be serious. For example, if you are doing computational fluid dynamics (CFD) for a Formula 1 racing car and a built-in function is calculating a key value wrongly, your car could be slower than the competition. It doesn't take much performance degradation to cost you places and waste large amounts of money.

In aircraft, rocket or missile technology, a mistake could take a long time to be spotted given the underlying complexity of the work. For those in the financial markets, there are often other systems that provide checks and balances, but those systems are becoming ever more tightly integrated, which means that data may be a calculated field in one system but entered as a value in the next. At that point the problem can no longer be traced to its source and assumes a more serious proportion.

There are instances where such errors are unlikely to have much impact on the decisions made – in a retail environment, for example, when looking at the sales and stocking levels of goods. However, if you were looking at gross margin, a product could move from marginal to dropped.

BI products are aimed at allowing users to extract data from these underlying sources, do a range of manipulations on that data and then use the results to make business decisions. In some cases, that data is returned to the databases and used by other users and teams as source data for their work. As in the financial market example, the error is now compounded.
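As a purely illustrative sketch of that compounding (the figures, the two-decimal export convention and the "system A/system B" names are assumptions, not drawn from any real deployment), consider a per-row margin calculated in one system, rounded on export, and then treated as raw source data by the next:

```python
# Hypothetical illustration: system A computes a per-row gross margin, exports it
# rounded to two decimal places, and system B treats the rounded figure as source
# data. Summed over many rows, the two totals drift apart and the discrepancy can
# no longer be traced back to a rounding step.

rows = 100_000
unit_cost, unit_price = 7.3333, 9.9999             # assumed figures

margin_in_system_a = unit_price - unit_cost        # what system A actually calculates
margin_exported    = round(margin_in_system_a, 2)  # what system B receives as "raw" data

total_a = margin_in_system_a * rows
total_b = margin_exported * rows

print(f"total margin, system A: {total_a:,.2f}")
print(f"total margin, system B: {total_b:,.2f}")
print(f"drift after one hand-off: {total_b - total_a:,.2f}")
```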

Recently, I spoke with Aster Data, a vendor of massively parallel processing (MPP) database solutions, and we talked about a recent press release in which they announced they had added over 1,000 MapReduce-ready functions to their data analytics suite. Such a massive increase in the number of functions without any third-party validation has to be of concern.

Aster Data say that they have not had any requests from customers to validate their functions or prepackaged applications. They do see a lot of requests from customers for more functions and applications and believe that customers are already starting to build their own use cases to validate data.

While Aster Data customers might be doing their own checks, there is no widespread evidence that this is common practice among users of spreadsheets, BI tools or databases. With the current explosion in end-user BI, it's time for software vendors to up their game and prove that their functions are consistent, accurate and fit for purpose.

 

Archived comments:

———Giles Thomas | July 23, 2010 7:15 PM

This is an interesting post; spreadsheet integrity is an increasingly critical issue, especially in the financial markets. Traders, for example, use spreadsheets to create pricing models to help them quickly decide on what positions to take, and when to take them. The spreadsheets are then routinely shared, cut and pasted and adapted across trading desks or even whole trading floors. As a result, they can easily become cumbersome, botched-together “Frankensheets”, and contain inherently difficult-to-spot inaccuracies.

While working at a large investment bank, I learned just how widely-used spreadsheets really are, and how frequently they end up being a source of frustration (or financial loss) to their users. It was after several years of this that I went on to build my own spreadsheet, Resolver One. In it, the formulae you put into the grid are compiled down into code in the Python programming language before being executed, while the equivalent of macro code is already in Python — so the built-in functions are identical in both and (obviously) can't behave differently.

A separate problem with spreadsheet accuracy — one you don't touch on, perhaps because it doesn't happen so much in databases, but which I've seen in the work of spreadsheet users who've reached the dangerous stage where they have started writing macros but don't have much experience with them yet — is the difference between the "functional" model on the grid, where you can (in theory) recompute the cells in any order that respects their mutual dependencies, and even skip cells whose dependencies haven't been changed since the last recalculation, and the "imperative" model in the macro and user-defined function (UDF) language, where a function can change global state and have side-effects. Because you can write a UDF that returns a different value each time it's called, based on global state, it's easy for an inexperienced macro developer to write UDFs that make the spreadsheets that use them generate inconsistent results.

———Marc | October 7, 2010 2:49 PM

I would agree 100% with Giles' comments that users writing their own macros can get themselves into hot water.

I can't see the connection between the author's discussion about producing sound macros in Excel and his conclusion that Aster Data systems would (may?) suffer from the same issues.

Maybe this article is just a rehash of the old garbage in, garbage out adage.

MapReduce is used by Google to index the World Wide Web – is that not some proof?


July 19, 2010  12:59 PM

TIBCO Silver Spotfire publishes BI to the Cloud

Profile: imurphy
BI, cloud

From the smallest home office business to the largest enterprise, the amount of data that businesses accumulate continues to grow. Using that information effectively is often challenging because users do not possess the tools or the knowledge to make the most of their data.

Small and even mid-sized enterprises often lack the resources to acquire BI tools, skills and training when compared to larger enterprises. This puts them at a disadvantage when it comes to competing and, more importantly, gaining a better insight to their business and market.

According to a recent press release, TIBCO Silver Spotfire is targeted at the SME as "a fully functional on-demand offering designed to enable anyone to create, publish and share custom dashboards or reports for business analytics and business intelligence (BI) in the Cloud."

For the first year, those companies who want to try this can get free access to TIBCO Silver Spotfire. It comes with an authoring client and expansive web-based sharing and hosting for the user’s favourite personal or business Spotfire application. After the year is up, TIBCO says that there will be a range of monthly hosting options for those who want to continue using the product.

TIBCO also describes this as not just BI as Software as a Service (SaaS) but as “social BI”. The idea is that individuals can quickly create and share information across the business as part of an ad-hoc corporate analytics knowledge base. Any data created through TIBCO Silver Spotfire can also be integrated into a range of social media, blogs and online articles.

At the heart of all of this is the TIBCO Silver Cloud Computing platform, which was updated in May 2010 as a hosted platform for TIBCO customers and on which the company has now made provision for Spotfire users alongside the Silver Spotfire beta.

A free one-year subscription will be attractive to many customers. However, it is important to note that this is not the full Spotfire Enterprise product that TIBCO sells but a reduced-functionality product. Customers who want to move to the full Enterprise version later will be able to do so and, at the same time, pull in the work that they have published on Silver Spotfire.

TIBCO is very clear about the target audience here. This is about extending the reach of BI into small companies, small branch offices and departments that need a simple BI tool, and into areas where TIBCO currently has no presence. By using a free, hosted, Cloud-based platform with no initial costs for customers, TIBCO believes that many companies will be tempted to try BI for the first time.

As a software developer tools vendor, TIBCO is also hoping to build a community of developers who want to build dashboards and other applications on top of the TIBCO Silver Cloud Computing platform, and in particular on Silver Spotfire. This would allow TIBCO to attract an ever-increasing set of customers and make the Silver platform more attractive to third-party Cloud hosting vendors who are looking for a value-add solution to attract more business.

Users work with a local copy of the Spotfire client to create their data visualisations and then upload the data to the server. The maximum single file size is 10MB of compressed data. That might not sound like a lot, but TIBCO believes it is more than enough for 300,000-500,000 rows of data, depending on the level of data redundancy and the type of visualisation used.
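As a rough sanity check on that claim, the back-of-the-envelope Python sketch below shows how a 10MB compressed upload could translate into a few hundred thousand rows. The compression ratios and average row widths are assumptions chosen for illustration, not figures from TIBCO.

```python
# Back-of-the-envelope estimate: how many rows fit in a 10MB compressed upload?
# Compression ratios and bytes-per-row are assumed values for illustration only.

MAX_UPLOAD_BYTES = 10 * 1024 * 1024

for compression_ratio in (4, 6):        # assumed: tabular text data compresses well
    for bytes_per_row in (100, 160):    # assumed average uncompressed row width
        uncompressed = MAX_UPLOAD_BYTES * compression_ratio
        rows = uncompressed // bytes_per_row
        print(f"{compression_ratio}:1 compression, {bytes_per_row} B/row -> ~{rows:,} rows")
```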

This is a little disappointing but signals where TIBCO currently is with this product. While a fully fledged Cloud BI platform would have been nice, this is about hosting your results not hosting your BI. What will be interesting over the next year is if TIBCO can not only build the complete BI Cloud platform but then sell that as part of the Silver Cloud Computing platform to third party vendors. Success in this area would be a significant market changer but would also need to be linked to a host of other components such as virtual machines, a fully fledged SaaS platform and an active developer community.

As the work is all done locally, when the data is published to the Silver Spotfire platform, the files are not linked to the underlying data sources. This is important as it means that the files are not going to auto update as the core data changes and users will need to build their own local processes to recreate and republish.

In this first release the data will be hosted in the US and there is no geo-locking. This means that you need to carefully control any data published through the platform to ensure that you do not inadvertently breach any data protection rules. With one of the goals of Silver Spotfire being to make it easier to use social media for publishing data, there is also a real risk of data leakage.

Stopping this is more challenging than many companies realise, so it is important that they step up their data management training for users. This does not preclude using Silver Spotfire, but it is something that must be taken into account, especially as there is no guidance on data protection policies on the TIBCO website.

Another missing element here is federated security. This is something that TIBCO has said it will be working on over the next year as it builds momentum with Silver Spotfire. At present, it is talking to the early adopters and will talk to any new customers about what they want in terms of security.

Despite the security and data protection concerns this looks like a very interesting opportunity and one that is well worth spending some time investigating.


July 16, 2010  12:32 PM

IBM assimilates the competition

Profile: imurphy
Databases, DeveloperWorks, IBM, Microsoft, Migration, Oracle, Porting, Sybase

Porting a database from one vendor's offering to another has always been difficult. To try to ease the pain, vendors provide porting guides, third-party tools companies have products that will take your schemas and stored procedures and recreate them for the new target database, and software testing companies have products that let you create a series of acceptance tests against the newly ported data.

Despite all of this, the way we embed code inside applications today means that we often don’t find the problems until it is too late and the help desk starts taking calls. One of the main reasons that embedded code causes us difficulty has been the divergence of SQL from a single standard into three main variants with a fourth – parallel SQL – starting to make its mark as databases become ever larger and queries more complex.

User Defined Fields are another challenge. They are often used to create a complex field type that the developer didn't want to break down into multiple fields for some particular reason associated with their code. They may also have been used to hold an unsupported data type from another database during a previous port.

The challenges go far beyond field and SQL constructs. The big database vendors are designing in features to support their own developer tools and key data-driven applications. These end up as elements inside the database which often have no equivalent in another vendor's products.

But no matter how many challenges you identify, people still want to port their databases. It may be financial, it may be that the new DBA or IT manager has a preference for a different vendor or it may be that someone outside of IT has decided that we should now be moving all our tools over to a new supplier.

IBM has decided that it is time to change the landscape. Alongside its existing migration documents and professional services engagements, IBM is now allowing developers to run native code from both Oracle and Sybase against DB2 9.7.

All of this comes at a time when Oracle is still digesting Sun and Sybase is being bought by SAP. By allowing native code to be run, IBM believes that customers who are unsure about what the future holds for Oracle and Sybase can quickly move to DB2 without having to cost in the rewrite of thousands of lines of application code.
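By way of illustration only – this is a sketch under assumptions, not IBM's documented procedure – the snippet below shows how Oracle-flavoured SQL and a PL/SQL-style block might be executed unchanged against a DB2 9.7 database created with the Oracle compatibility vector enabled, using the ibm_db Python driver. The connection details, the TESTORA database and the accounts table are hypothetical.

```python
# Sketch only: assumes a DB2 9.7 instance where the Oracle compatibility vector
# was enabled before the database was created, e.g.
#   db2set DB2_COMPATIBILITY_VECTOR=ORA
#   db2stop && db2start && db2 "create database TESTORA"
# and that the ibm_db driver is installed (pip install ibm_db).

import ibm_db

conn = ibm_db.connect(
    "DATABASE=TESTORA;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2inst1;PWD=secret",
    "", "")

# Oracle-style idioms (DUAL, NVL) accepted without rewriting.
stmt = ibm_db.exec_immediate(conn, "SELECT NVL(NULL, 'ported') AS result FROM DUAL")
print(ibm_db.fetch_assoc(stmt))

# A PL/SQL-style anonymous block run as-is; 'accounts' is a hypothetical table.
plsql_block = """
BEGIN
  UPDATE accounts SET balance = balance * 1.01 WHERE region = 'EU';
END;
"""
ibm_db.exec_immediate(conn, plsql_block)

ibm_db.close(conn)
```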

And come those customers have. IBM claims it has been able to take banking customers from Sybase who were worried about systems optimisation. At the same time, it has picked up over 100 SAP implementations that were either looking at or already deployed on Oracle.

There is another reason for IBM to make itself the universal database target. While its competitors spend large sums of money buying application solutions and then rewriting them to run on their databases, IBM is able to simply focus on the underlying database technology. This allows it to focus its R&D on database performance and optimisation and then consume any application tier.

These are not the only two databases that IBM is targeting. It has both MySQL and Microsoft SQL Server in its sights although, at present, those are still handled by migration rather than a native code solution, and IBM is unable to say when native code support will arrive. For Microsoft SQL Server this should be relatively simple, as the T-SQL it uses has not diverged much from the Sybase T-SQL from which it was derived.

To help developers understand more about this, there are a number of very interesting articles on the IBM DeveloperWorks website:

 

http://www.ibm.com/developerworks/data/library/techarticle/dm-0907oracleappsondb2/

http://www.redbooks.ibm.com/abstracts/sg247736.html?Open

http://www.ibm.com/developerworks/wikis/display/DB2/Chat+with+the+Lab

http://www.ibm.com/developerworks/data/downloads/migration/mtk/

http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/

 


July 16, 2010  10:08 AM

The problem with Apple

Profile: Cliff Saran
Apple, fault, ipad, iPhone, left-hand, recall

The media seems to love the iPhone and iPad. TV gadget shows, celebrities and journalists feed the hype over a new Apple product. An iPhone or iPad launch is a big event, which drives more and more people to buy products on day one of launch, before anyone has even reviewed the product.

 

This means that products are not properly beta tested, such as the left-handed problem on the iPhone 4. It is entirely Apple's fault – for not running extensive quality assurance and product tests. If Apple is such a great brand, then it should offer customers the very highest quality products. Unfortunately, this is not how the Apple marketing machine currently works. Let's hope that today its execs have to admit they were wrong, and agree to recall millions of products to fix the ridiculous iPhone 4 problem, which could have been spotted by any beta tester.


July 14, 2010  6:00 PM

Teradata and ESRI combine to map data

Profile: imurphy
ESRI, Teradata

At the ESRI User Conference in San Diego, CA this week, Teradata and ESRI announced a new collaboration aimed at storing business data and Geographical Information System (GIS) data in the same database. The goal is to enable businesses to better understand where their customers are in order to do more focused marketing.

This use of GIS and BI data is nothing new, with an increasing number of marketing departments having built their own solutions over the last 20 years. What is new is that all the data from both systems is stored in the same database. The advantage for users is that rather than having to build complex queries against multiple data sources, they can write simpler, faster queries against a single database.
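To make the "single query against a single database" point concrete, here is a hedged Python sketch. The table names, columns and the method-style ST_Contains spatial predicate are hypothetical illustrations of combining business and GIS data in one query; they are not taken from the Teradata/ESRI announcement.

```python
# Hypothetical single-database query: match customers to the store trade area that
# contains their location and total their recent spend, in one pass. The schema and
# the SQL/MM-style spatial predicate are illustrative, not a documented example.

COMBINED_QUERY = """
SELECT s.store_id,
       c.customer_id,
       SUM(t.amount) AS spend_30d
FROM   stores s
JOIN   customers c
       ON s.trade_area.ST_Contains(c.location) = 1   -- spatial join done in-database
JOIN   transactions t
       ON t.customer_id = c.customer_id
WHERE  t.txn_date >= CURRENT_DATE - 30
GROUP  BY s.store_id, c.customer_id
"""

def store_level_spend(connection):
    """Run the combined business + GIS query on any DB-API style connection."""
    cursor = connection.cursor()
    cursor.execute(COMBINED_QUERY)
    return cursor.fetchall()
```

With the GIS data held in a separate system, answering the same question would mean two extracts plus client-side matching of store polygons to customer points – the complexity the single-database approach removes.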

There are other advantages here for users. With both sets of data inside the same database, new data can be automatically matched with GIS information as it is entered. For any retailer doing overnight updates of their store data into a central BI system this provides them with the ability to create “next day” marketing campaigns aimed at individual stores. For large retailers, such as supermarkets, this is going to be highly attractive.

As well as retailers, Teradata and ESRI are targeting a number of other vertical markets such as telecommunications, utilities, transportation and government departments. In all these cases, being able to map usage and consumption to GIS data will mean the ability to deliver better services as need arises.

One area that will benefit greatly is emergency response, where the mapping element of the GIS data will enable response teams in the field to immediately match population location with access routes. It will also provide them with the ability to create safe zones based on local geography without the problem of trying to match conditions on the ground with remote operations staff.

All of this marks a switch away from the integration plans of IBM, Oracle, Microsoft and others, who believe the future is in being able to access multiple data sources at query time, and it will be interesting to see how long it takes for the others to follow Teradata's example. One company that could move quickly down this route is Microsoft, using its Bing Maps data, but it currently has no plans to do so.

Archived comments:

Matt | July 16, 2010 5:32 PM

“What is new is that all the data from both systems is being stored in the same database. The advantage for users is that rather than have to build complex queries against multiple data sources, they can write simpler, faster queries against a single database.”

How is this new? It has been possible to do this for years with Oracle/Spatial or PostgreSQL/PostGIS. The main stumbling block for ESRI users has been ESRI’s proprietary data structures and expensive middle tier server technology and its historically poor support for truly enterprise class RDBMS.

