Cliff Saran’s Enterprise blog


July 19, 2010  12:59 PM

TIBCO Silver Spotfire publishes BI to the Cloud

imurphy Profile: imurphy
BI, cloud

From the smallest home office business to the largest enterprise, the amount of data that businesses accumulate continues to grow. Using that information effectively is often challenging because users do not have the tools or the knowledge to make the most of their data.

Small and even mid-sized enterprises often lack the resources to acquire BI tools, skills and training when compared to larger enterprises. This puts them at a disadvantage when it comes to competing and, more importantly, gaining a better insight into their business and market.

According to a recent press release, TIBCO Silver Spotfire is targeted at SMEs as “a fully functional on-demand offering designed to enable anyone to create, publish and share custom dashboards or reports for business analytics and business intelligence (BI) in the Cloud.”

For the first year, those companies who want to try this can get free access to TIBCO Silver Spotfire. It comes with an authoring client and expansive web-based sharing and hosting for the user’s favourite personal or business Spotfire application. After the year is up, TIBCO says that there will be a range of monthly hosting options for those who want to continue using the product.

TIBCO also describes this as not just BI as Software as a Service (SaaS) but as “social BI”. The idea is that individuals can quickly create and share information across the business as part of an ad-hoc corporate analytics knowledge base. Any data created through TIBCO Silver Spotfire can also be integrated into a range of social media, blogs and online articles.

At the heart of all of this is the TIBCO Silver Cloud Computing platform, which was updated in May 2010 as a hosted platform for TIBCO customers alongside the Silver Spotfire beta, and on which the company has now made provision for Spotfire users.

A free one year subscription will be attractive to many customers. However, it is important to note that this is not the full Spotfire Enterprise product that TIBCO sells but a reduced functionality product. Customers who want to move to the full Enterprise version later will be able to do so and, at the same time, pull in the work that they have published on Silver Spotfire.

TIBCO is very clear about the target audience here. This is about extending the reach of BI into small companies, small branch offices and departments that need a simple BI tool, and where TIBCO currently has no presence. By offering a free, hosted, Cloud-based platform with no initial costs for customers, TIBCO believes that many companies will be tempted to try BI for the first time.

As a software developer tools vendor, TIBCO is also hoping to build a community of developers who want to build dashboards and other applications on top of the TIBCO Silver Cloud Computing platform, and in particular on Silver Spotfire. This would allow TIBCO to attract an ever-increasing set of customers and make the Silver platform more attractive to third-party Cloud hosting vendors who are looking for a value-add solution in order to attract more business.

Users work with a local copy of the Spotfire client to create their data visualisations and then upload the data to the server. The maximum single file size is 10MB of compressed data. That might not sound like a lot, but TIBCO believes that it is more than enough for 300,000-500,000 rows of data depending on the level of data redundancy and the type of visualisation used.
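
As a rough sanity check before uploading, it is worth compressing a local extract and seeing whether it comes in under that ceiling. The short Python sketch below does just that; the CSV filename is hypothetical and gzip is only an approximation of Spotfire's own compression.

import gzip
import os
import shutil

SOURCE = "sales_extract.csv"   # hypothetical local data extract
TARGET = SOURCE + ".gz"
LIMIT_MB = 10                  # Silver Spotfire's stated single-file ceiling

# Compress the extract and compare the result against the 10MB limit
with open(SOURCE, "rb") as src, gzip.open(TARGET, "wb") as dst:
    shutil.copyfileobj(src, dst)

size_mb = os.path.getsize(TARGET) / (1024 * 1024)
status = "within" if size_mb <= LIMIT_MB else "over"
print(f"Compressed size: {size_mb:.1f} MB ({status} the {LIMIT_MB} MB limit)")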

This is a little disappointing but signals where TIBCO currently is with this product. While a fully fledged Cloud BI platform would have been nice, this is about hosting your results not hosting your BI. What will be interesting over the next year is if TIBCO can not only build the complete BI Cloud platform but then sell that as part of the Silver Cloud Computing platform to third party vendors. Success in this area would be a significant market changer but would also need to be linked to a host of other components such as virtual machines, a fully fledged SaaS platform and an active developer community.

As the work is all done locally, when the data is published to the Silver Spotfire platform the files are not linked to the underlying data sources. This is important as it means that the files are not going to auto-update as the core data changes, and users will need to build their own local processes to recreate and republish.

In this first release the data will be hosted in the US and there is no geo-locking. This means that you need to carefully control any data published through the platform to ensure that you do not inadvertently breach any data protection rules. With one of the goals of Silver Spotfire being to make it easier to use social media for publishing data, there is also a real risk of data leakage.

Stopping this is more challenging than many companies realise, so it is important that companies step up their data management training for users. This does not preclude using Silver Spotfire, but it is something that must be taken into account, especially as there is no guidance on data protection policies on the TIBCO website.

Another missing element here is federated security. This is something that TIBCO has said it will be working on over the next year as it builds momentum with Silver Spotfire. At present, it is talking to the early adopters and will talk to any new customers about what they want in terms of security.

Despite the security and data protection concerns this looks like a very interesting opportunity and one that is well worth spending some time investigating.

July 16, 2010  12:32 PM

IBM assimilates the competition

imurphy Profile: imurphy
Databases, DeveloperWorks, IBM, Microsoft, Migration, Oracle, Porting, Sybase

Porting a database from one vendor's offering to another has always been difficult. To try to ease the pain, vendors produce porting guides, third-party tools companies have products that will take your schemas and stored procedures and recreate them for the new target database, and software testing companies have products that will allow you to create a series of acceptance tests against the newly ported data.

Despite all of this, the way we embed code inside applications today means that we often don’t find the problems until it is too late and the help desk starts taking calls. One of the main reasons that embedded code causes us difficulty has been the divergence of SQL from a single standard into three main variants with a fourth – parallel SQL – starting to make its mark as databases become ever larger and queries more complex.
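
A small illustration of that divergence is the common task of fetching the first ten rows of a table: each of the three main dialects spells it differently. The table name below is invented, and the Oracle form shown is the classic pre-12c ROWNUM idiom.

# The same "first ten orders" query expressed in three SQL dialects.
first_ten_orders = {
    "tsql (SQL Server / Sybase ASE)":
        "SELECT TOP 10 * FROM orders ORDER BY order_date",
    "oracle":
        "SELECT * FROM (SELECT * FROM orders ORDER BY order_date) "
        "WHERE ROWNUM <= 10",
    "db2":
        "SELECT * FROM orders ORDER BY order_date FETCH FIRST 10 ROWS ONLY",
}

for dialect, sql in first_ten_orders.items():
    print(f"{dialect}: {sql}")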

User Defined Fields are another challenge. They are often used to create a complex field type that the developer didn’t want to break down into multiple fields for some particular reason associated with their code. They may also have been used to hold an unsupported data type from another database during a previous port.

The challenges go far beyond field and SQL constructs. Big database vendors are designing in features to support their own developer tools and key data-driven applications. These end up as elements inside the database which often have no equivalent in another vendor’s products.

But no matter how many challenges you identify, people still want to port their databases. It may be financial, it may be that the new DBA or IT manager has a preference for a different vendor or it may be that someone outside of IT has decided that we should now be moving all our tools over to a new supplier.

IBM has decided that it is time to change the landscape. Alongside its existing migration documents and professional services engagements, IBM is now allowing developers to run native code from both Oracle and Sybase against DB2 9.7.

All of this comes at a time when Oracle is still digesting Sun and Sybase is being bought by SAP. By allowing native code to be run IBM believes that those customers who are unsure about what the future holds for Oracle and Sybase can quickly move to DB2 without having to cost in the rewrite of thousands of lines of application code.
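
As a rough sketch of what running native code can look like in practice: DB2 9.7 can be set up in Oracle compatibility mode (via the DB2_COMPATIBILITY_VECTOR registry setting) so that a PL/SQL block written for Oracle is submitted unchanged. The Python fragment below assumes the ibm_db driver and a database already created with that compatibility enabled; the connection details are placeholders.

import ibm_db

# Placeholder connection string for a DB2 9.7 database created with
# Oracle compatibility enabled (DB2_COMPATIBILITY_VECTOR=ORA).
conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2inst1;PWD=secret;", "", "")

# An anonymous PL/SQL block, written for Oracle, executed as-is on DB2
plsql = """
BEGIN
    DBMS_OUTPUT.PUT_LINE('Hello from PL/SQL running on DB2 9.7');
END;
"""
ibm_db.exec_immediate(conn, plsql)
ibm_db.close(conn)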

And come those customers have. IBM is claiming that it has been able to take banking customers from Sybase who were worried about systems optimisation. At the same time, it has picked up over 100 SAP implementations that were either looking at or already deployed on Oracle.

There is another reason for IBM to make itself the universal database target. While its competitors spend large sums of money buying application solutions and then rewriting them to run on their databases, IBM is able to simply focus on the underlying database technology. This allows it to focus its R&D on database performance and optimisation and then consume any application tier.

These are not the only two databases that IBM is targeting. It has both MySQL and Microsoft SQL Server in its sights although, at present, those are still part of a migration rather than a native code solution and IBM is unable to say when there will be native code solutions. For Microsoft SQL Server, this should be relatively simple as the T-SQL it uses has not diverged much from the Sybase T-SQL from which it was derived.

To help developers understand more about this there are a number of very interesting articles up on the IBM DeveloperWorks web site.

 

http://www.ibm.com/developerworks/data/library/techarticle/dm-0907oracleappsondb2/

http://www.redbooks.ibm.com/abstracts/sg247736.html?Open

http://www.ibm.com/developerworks/wikis/display/DB2/Chat+with+the+Lab

http://www.ibm.com/developerworks/data/downloads/migration/mtk/

http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/

 


July 16, 2010  10:08 AM

The problem with Apple

Cliff Saran Profile: Cliff Saran
Apple, fault, ipad, iPhone, left-hand, recall

The media seems to love the iPhone and iPad. TV gadget shows, celebrities and journalists feed the hype over a new Apple product. An iPhone or iPad launch is a big event, which drives more and more people to buy products on day one of launch, before anyone has even reviewed the product.

This means that products are not properly beta tested, such as the left-handed problem on the iPhone 4. It is entirely Apple’s fault for not running extensive quality assurance and product tests. If Apple is such a great brand, then it should offer customers the very highest quality products. Unfortunately, this is not how the Apple marketing machine currently works. Let’s hope that today its execs have to admit they were wrong, and agree to recall millions of products to fix the ridiculous iPhone 4 problem, which could have been spotted by any beta tester.


July 14, 2010  6:00 PM

Teradata and ESRI combine to map data

imurphy Profile: imurphy
ESRI, Teradata

At the ESRI User Conference in San Diego, CA this week, Teradata and ESRI announced a new collaboration aimed at storing business data and Geographical Information System (GIS) data in the same database. The goal is to enable businesses to better understand where their customers are in order to do more focused marketing.

Using GIS and BI data together is nothing new; marketing departments have been building their own solutions for the last 20 years. What is new is that all the data from both systems is stored in the same database. The advantage for users is that, rather than having to build complex queries against multiple data sources, they can write simpler, faster queries against a single database.

There are other advantages here for users. With both sets of data inside the same database, new data can be automatically matched with GIS information as it is entered. For any retailer doing overnight updates of their store data into a central BI system this provides them with the ability to create “next day” marketing campaigns aimed at individual stores. For large retailers, such as supermarkets, this is going to be highly attractive.
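
That kind of matching can be illustrated outside any particular database: as each new customer record arrives in the overnight load, it is tagged with its nearest store. The self-contained Python sketch below uses invented store locations and a simple great-circle distance; it shows the idea, not the Teradata/ESRI implementation.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Invented store locations: (store_id, latitude, longitude)
stores = [("LDN-01", 51.5074, -0.1278), ("MAN-01", 53.4808, -2.2426)]

def nearest_store(lat, lon):
    """Tag an incoming customer record with its closest store."""
    return min(stores, key=lambda s: haversine_km(lat, lon, s[1], s[2]))[0]

# A new customer record arriving from the overnight load (Birmingham-ish)
print(nearest_store(52.4862, -1.8904))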

As well as retailers, Teradata and ESRI are targeting a number of other vertical markets such as telecommunications, utilities, transportation and government departments. In all these cases, being able to map usage and consumption to GIS data will mean the ability to deliver better services as need arises.

One area that will benefit greatly is emergency response, where the mapping element of the GIS data will enable response teams in the field to immediately match population location with access routes. It will also provide them with the ability to create safe zones based on local geography without the problem of trying to match conditions on the ground with remote operations staff.

All of this marks a switch away from the integration plans of IBM, Oracle, Microsoft and others, who believe that the future is in being able to access multiple data sources at query time, and it will be interesting to see how long it takes for the others to follow Teradata's example. One company that could move quickly down this route is Microsoft, using its Bing Maps data, but it currently has no plans to do so.

Archived comments:

Matt | July 16, 2010 5:32 PM

“What is new is that all the data from both systems is being stored in the same database. The advantage for users is that rather than have to build complex queries against multiple data sources, they can write simpler, faster queries against a single database.”

How is this new? It has been possible to do this for years with Oracle/Spatial or PostgreSQL/PostGIS. The main stumbling block for ESRI users has been ESRI’s proprietary data structures and expensive middle tier server technology and its historically poor support for truly enterprise class RDBMS.


July 14, 2010  8:49 AM

Microsoft Patch Tuesday: 13th July 2010

Cliff Saran Profile: Cliff Saran
Application Compatibility, Applications, ChangeBase, Microsoft, office, Testing, Windows, Windows 7

With this July Microsoft Patch Tuesday Security Update, we see a moderate number of security updates, with four updates to Windows XP, Windows 7 and Office, including three rated as ‘Critical’ and one rated as ‘Important’. Unfortunately, all patches released this month will most likely require a reboot of the target system. In addition, all of these Microsoft Security Updates relate to Remote Code Execution vulnerabilities.

The ChangeBase AOK Patch Impact team has updated its sample application database, which now contains more than 2,000 unique application packages. All of the applications in this large sample application portfolio are analysed for application-level conflicts with Microsoft Security Updates and potential dependencies.

Based on the results of our AOK Application Compatibility Lab, only one of the July Patch Tuesday updates is likely to require significant application-level testing:

MS10-044 Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege

We have included a brief snapshot of some of the results from our AOK software, demonstrating some of the potential impacts on the OSP application package, in the following image:

Patch Tuesday1.JPG

In addition to this high-level summary, we have also included a small sample of one of the AOK Summary reports from a smaller sample database:

Patch Tuesday2.JPG

Microsoft Patch Tuesday Update Testing Summary

MS10-042 Cumulative Security Update of ActiveX Kill Bits

MS10-043 Cumulative Security Update for Internet Explorer

MS10-044 Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege

MS10-045 Vulnerabilities in Microsoft SharePoint Could Allow Elevation of Privilege 

Patch Tuesday3.JPG

Security Update Detailed Summary

Patch Tuesday4.JPG

Patch Tuesday5.JPG

Patch Tuesday6.JPG 


July 14, 2010  8:41 AM

Oracle announces BI 11g

imurphy Profile: imurphy
Business Intelligence, Oracle

Last week, Oracle announced the availability of Oracle Business Intelligence 11g. This is a major release for Oracle and comes at a time when Microsoft and SAP have both made major announcements of their own.

The emphasis at the launch was on integration, with Charles Phillips, Oracle’s president, stressing the depth of the Oracle stack from storage to applications and all points in between. Much of the stack has come from the Sun acquisition, and it is too soon to be sure that the emphasis on integration that Phillips kept stressing is really there.

Phillips was keen to point out that it was not just the ability to present an integrated stack that set Oracle apart from the competition but its focus on standards. Phillips told attendees at the launch that Oracle “supports standards, helps define standards and is standards driven”. One of the challenges for Phillips here is that Oracle has been particularly coy on what is happening with the whole Java standards process.

The key message for Oracle BI 11g, alongside integration, was that as the industry moves forward we will begin to see BI embedded in all our processes. This makes a lot of sense. To many people, BI is still all about sales and competitive edge. Yet some companies are already looking at BI tools to see what else they can provide, such as a better understanding of IT management in complex environments or of the performance of software development. At present, however, Oracle has no stated intention of addressing either of these markets.

There is little doubt that Oracle is keen to counter the messaging from Microsoft about BI everywhere. At the launch, there was a keen focus on the end-user experience and the ability not only to access BI from any device but to ensure consistency of what you were working with.

This is important. One of the criticisms of the Microsoft TechEd announcements was the loss of synchronisation and control of data as users saved into SharePoint and created a lot of disconnected data. Oracle is keen to ensure that it can keep control of the data and there was a significant focus on security and data management.

Yet despite all this talk of access from everywhere, Oracle has decided against embedding BI tooling inside Oracle Open Office and has instead opted for seamless integration with Microsoft Office. This has to be a mistake, and those who believed that Oracle has no interest in Open Office will see this as the smoking gun they have been waiting for.

If Oracle really wants to see Open Office and the enterprise version become a significant competitor to Microsoft’s Office then embedding the BI tools is a requirement. This will also make it easier for developers to create applications that are part of the daily toolset used by end users and make BI just another desktop function.

Phillips’ presentation was just the warm-up. It was left to Thomas Kurian, Oracle’s executive vice president, to make the full technical presentation of Oracle Business Intelligence 11g.

Kurian wasted no time in presenting the Common Business Intelligence Foundation upon which the Common Enterprise Information Model is built. This is designed to manage all of the integration between applications/devices and the data sources.

There were several features that stood out for me. The first is reporting and Oracle BI Publisher: lightweight, able to access multiple data source formats, able to output to a wide range of file formats, and also scalable. What was missing was any announcement that Oracle was going to release it as a BI appliance. This would be a game changer, especially if those appliances could be deployed to remote offices.

The second key feature is the integration with WebCenter Workspaces allowing users to collaborate on BI reports. What isn’t fully clear yet is whether this has the same potential for data explosion as Microsoft allowing users to save to SharePoint.

The third feature is the Oracle BI Action Framework which can be manual or automated and will appeal to developers looking to build complex applications. It will tie into the existing Oracle Middleware and uses alerts to detect changes to key data. Users can not only ensure that they are working with the latest data but developers can use those alerts to call web services and trigger workflows.

Security is a major problem with BI. As data is extracted from multiple sources and stored locally, it is possible to lose data and end up with unauthorised access to data. The BI 11g Security model uses watermarking of reports as well as encryption along with role-based security. One potential issue here will be how that encryption will be deployed throughout the business chain when you are distributing data to suppliers and customers.

Finally, not only is Oracle looking to provide a range of packaged BI applications within the BI Enterprise Edition but it is ensuring that the BI capability is embedded inside its existing applications. This two-pronged approach should mean a tighter link between the BI tools and the applications.

This was a big announcement from Oracle and one that requires some digestion. There are missing elements, such as the Oracle Open Office integration and the failure to announce a BI appliance. However, it is clear that Oracle is determined to stamp its control on the BI market and, as it continues to integrate Sun into the product strategy, we can expect to see complete end-to-end solutions in the coming months.

You can watch the keynotes of both Phillips and Kurian as well as download their presentations at: http://www.oracle.com/oms/businessintelligence11g/webcast-075573.html


July 8, 2010  6:44 PM

SAP and Sybase – who gains?

imurphy Profile: imurphy
SAP, Sybase

When SAP announced its intention to acquire Sybase in May 2010, it immediately raised a number of questions. Seven weeks on and neither side seems particularly interested in publicly talking about the rationale behind this acquisition.

At first glance, this appears to be a smart move by SAP and a business saver for Sybase. 

A decade ago, ERP and CRM applications were seen as only relevant for large enterprises. Today, with the explosion of hosted services, even the smallest of companies can buy access to such software. This means that vendors need to be quick to respond and be able to support a much wider spread of customers.

SAP has led this market for a number of years, but the acquisition of Siebel by Oracle, the consolidation by Microsoft of its Dynamics division and the success of Salesforce.com have started to make inroads into the business. SAP has not been idle. It has built a strong developer community and has established hosting deals with a number of companies such as T-Systems, which hosts over 1.5m SAP seats.

Despite all of this, SAP has one part of the cycle that it does not own and its competitors do – the underlying database. The ability for customers and developers to tune their applications for maximum performance is critical and the best way to do this is to own all the components.

This presents SAP with a real challenge. It has done very well out of IBM, Microsoft and Oracle, all of whom have invested significant sums of money in building consultancies capable of tuning SAP on their database products. Oracle recently set a new benchmark for SAP performance running on top of its own database products so it might seem that there is little need for SAP to buy its own database product.

Sybase has been the fourth-largest database vendor for some time now, but the last two decades have not always been kind to it. In the 1990s, not only did it rival Microsoft with its Rapid Application Development tools but at various points it was seen as the market leader. When the RAD tools market took a dive, Sybase was hit very hard and has really struggled to reinvent and reposition PowerBuilder as a mainstream development tool.

The well publicised split between it and Microsoft that left Microsoft with SQL Server did allow Sybase to concentrate on the Enterprise market while Microsoft built a product that could compete with it. While no longer being a significant player in the general database space, Sybase does have a serious position in the high-end database market, mobile services and low-end portable databases. Sybase also has its own Business Intelligence and Analytics tools.

All of these appeal to SAP. They can use the high-end database product which includes in-memory and cloud versions to extend their hosted platform offerings. The mobile services platform means that they can position themselves into the operator and payments arena. Finally, the low-end portable database market means that their development community can build applications for mobile workforces where data can be collected on devices such as Smartphones, PDAs and laptops that will synchronise easily into the enterprise solutions.

Taken together, this would appear to give SAP a complete set of offerings and enable it to compete with those competitors who have a complete stack from developer tools, through ERP/CRM and database.

However, there are issues that need to be resolved. The first is that a large percentage of SAP sales comes through the professional services teams at IBM, Oracle and even HP. By having its own database and tools, SAP will need to prove that it is not intending to abandon customers using other databases.

While the Sybase tooling looks good, it is still far from perfect. Building tools for a wide range of mobile devices is not easy and the current tools are very Microsoft-focused. Despite talking about Rich Internet Applications for several years, Sybase has failed to deliver any serious RIA tooling.

The BI tools are not widely used and SAP is going to have to make decisions as to how to integrate them with Crystal Reports to create a single powerful end to end reporting and analytics engine.

So, is this a wise move? Provided SAP is prepared to drive Sybase and not allow it to just operate as a fully autonomous business unit, this makes sense. But if it treats Sybase in the same way as EMC did VMware for several years, any benefits will be slow in maturing.


June 30, 2010  10:26 AM

IBM believes in commoditised HPC for BI

imurphy Profile: imurphy
BI, IBM

BI needs power. Lots of power. Before multi-core processors and desktop BI arrived, companies doing serious BI used mainframes, mini-computers or small High Performance Computing (HPC) setups. The cost was not just the hardware: software vendors priced for the hardware configuration, which made this an expensive solution.

Moving BI to the desktop changed this. By taking advantage of unused local processing power and drastically reducing the cost of the tools, BI quickly found itself being used more widely.

In the last decade, however, we have seen the massive explosion of multi-core, multi-socket commodity servers, blade systems and motherboards capable of supporting 512GB of RAM. Alongside this have come virtualisation, fast Storage Area Networks (SANs) and huge storage arrays. So is now the time to consider moving BI back to the datacentre?

IBM thinks so and its reasons make a lot of sense.

While BI at the desktop has been a success, IBM points to the fact that it has some serious shortcomings. One of these is “versions of the truth”. The issue here is that users can often be working off copies of the same dataset taken at different points in time. If that data is not linked to the same core data, or has been derived from another user’s dataset, then there are inconsistencies within the data. Anyone making business decisions is not doing so with the best data around.

Another issue is data control. As data is pulled down by users, it is “lost” by the datacentre. While the original data remains, the copies can often be stored locally and as few organisations have a truly universal backup approach to all their compute devices, this data disappears from view. In many cases, it is still owned by the organisation but there exists the potential for data to be removed from the organisation and that is a much more serious matter.

Bandwidth can also become a significant challenge. If the users and the data are co-located then this is just a LAN issue. But where data is remotely located, perhaps in a different country or even in the Cloud, there will be charges for moving such large amounts of data.

As BI evolves, we are seeing users wanting to go further than just working with a subset of corporate data. They want, and are being encouraged, to access data from other sources. A typical example here is geodata and census information from government and other sources. This allows sales teams to get a very detailed picture of who is buying their products and where they are being sold, and from this they can build much more effective sales plans.

The challenge here is that the data will often be in different formats and may not even have common fields or data types. This means that the BI user needs very advanced tools, a lot of knowledge and the processing power to make sense of all of this data.

Proponents of localised BI accept that bandwidth and data security are an issue and rightly point out that these are wider issues than just BI. They also point to the increasing power of desktop tools and the power of local machines. So does IBM actually have a case?

The answer is yes.

The commodity server explosion has ensured that the datacentre has more resources than ever before. With virtualisation, those resources are flexible and can be applied as required. Commodity servers have also made HPC easier and cheaper to implement and this is one of the reasons why IBM believes that it is time to bring HPC and BI back together again.

One of the key advantages of modern HPC is that it is capable of taking advantage of parallel programming, and for complex BI applications parallelism offers a significant advance in performance. While desktops can run individual analyses of datasets, an HPC array using parallel processing can outperform a large number of desktops.
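
A toy illustration of the pattern: split a dataset into partitions, aggregate each partition on its own core, then combine the partial results. This is only a sketch of the general idea in Python, not IBM's implementation.

from multiprocessing import Pool
from random import random

def partial_sum(partition):
    """Aggregate one partition of the data (here, a simple sum)."""
    return sum(partition)

if __name__ == "__main__":
    # Fake dataset split into 8 partitions of 100,000 values each
    partitions = [[random() for _ in range(100_000)] for _ in range(8)]

    # Each partition is aggregated in parallel; results are combined at the end
    with Pool() as pool:
        partials = pool.map(partial_sum, partitions)

    print("Total:", sum(partials))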

More importantly, that HPC array is working off a single master dataset. This means that there is a single enterprise version of the truth as far as the data goes. The data is kept synchronised and not lost to the datacentre. Data security and accuracy are enhanced and it is much easier to meet your legal and compliance obligations.

As the HPC array will be located in the same place as the SAN, the network can be tuned to ensure that the data is delivered optimally. This means no delays due to external systems and, financially, there are no huge penalties for moving large volumes of data in and out of the Cloud.

This does not mean that there is no place for fine tuning of data locally, but it does reduce the amount of data moved and ensures that the vast bulk of the processing is done much more efficiently.

Integrating multiple different types of data sources is also something that can be best done at the IT department level making it easier for users to then take advantage of data from different places and allowing them to spend their time focusing on getting answers rather than manipulating data.

On principle, I am not a great fan of pulling everything back into the datacentre but IBM does make a compelling case here. I would even go as far as saying that if you move the end-user BI tools onto virtual desktops or terminal services that are run in the same location as the HPC and BI software, you would gain even more advantages in terms of performance and security.


June 28, 2010  9:46 AM

Learning to live with MeeGo

Cliff Saran Profile: Cliff Saran
Linux, MeeGo, Netbook

I have been using MeeGo 1.0 for almost four weeks. My first impression is that, out of the box, MeeGo is an excellent OS for netbook users. It boots up quickly and has a clean desktop user interface, which makes it very easy to see running apps. It is nice to see that Intel is now demoing a version with touch functionality, which will make the UI truly excellent for mobile phone and netbook users.

 

The main problem with MeeGo – and for that matter any Linux distro – is that if you want to do something not included in the distribution, things can become increasingly difficult.

 

I spent last weekend hacking the MeeGo OS to try to get Audacity, the open source sound editor, to work. You’ll need a load of source code/dev.lib packages, to provide the development tools and various libraries, in order to ./configure audacity successfully. My estimate is that a make compilation takes around 40 mins. Installation through make install puts the Audacity icon on the MeeGo desktop, which does save an awful lot of time figuring out how to do this. Here’s a screenshot of it running on MeeGo…but it’s not quite working yet. I still need to figure out how to configure Audacity to use my sound device.

 

audacity.jpg


June 11, 2010  2:09 PM

Quest’s TOAD hops onto the Cloud

imurphy Profile: imurphy
cloud, DBA, Quest Software, SQL Azure

Next week Quest Software will announce the release of TOAD for Cloud, extending the reach of its database tools into the Cloud environment. In the first version there will be support for four data sources: Amazon SimpleDB, Microsoft SQL Azure, Apache HBase and any database with an ODBC driver.

TOAD for Cloud will be freely available from the Quest website and, according to Brent Ozar at Quest Software, “the target market is DBAs and data analysts who need to be able to do cross joins between Cloud platforms and their existing databases inside the organisation.”
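
Conceptually, that kind of cross-source join looks something like the sketch below: pull rows from two sources over ODBC and stitch them together on the client. The DSNs, tables and columns are hypothetical, and this is a generic illustration rather than TOAD's own mechanism.

import pyodbc

# Hypothetical DSNs: one for an on-premise database, one for a cloud source
onprem = pyodbc.connect("DSN=OnPremSales")
cloud = pyodbc.connect("DSN=CloudOrders")

# Pull the two halves of the join
customers = {row.customer_id: row.name
             for row in onprem.execute("SELECT customer_id, name FROM customers")}
orders = cloud.execute("SELECT customer_id, total FROM orders").fetchall()

# Join on the client side
for order in orders:
    print(customers.get(order.customer_id, "unknown"), order.total)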

Ozar has already said that Quest will be adding support for more databases over time, including the Apache Cassandra project, but stopped short of identifying Oracle, Sybase and DB2 as early targets despite the fact that all three have either shipped or announced Cloud versions of their products.

As this is built on the same underlying toolset as TOAD for Data Analysts, it is likely that the full reporting capabilities of that product will be available to the TOAD for Cloud product soon.

