DBTA E-EDITION
February 2013

Subscribe to the online version of Database Trends and Applications magazine. DBTA will send occasional notices about new and/or updated DBTA.com content.


Trends and Applications

Having vast amounts of data at hand doesn't necessarily help executives make better decisions. In fact, without a simple way to access and analyze the astronomical amount of available information, it is easy to become frozen with indecision, knowing the answers are likely in the data but unsure how to find them. With so many companies claiming to offer salvation from all data issues, one of the most important factors to consider when selecting a solution is ease of use. An intuitive interface based on how people already operate in the real world is the key to adoption and usage throughout an organization.

In-memory technology - in which entire data sets are pre-loaded into a computer's random access memory, eliminating the need to shuttle data between memory and disk storage every time a query is initiated - has actually been around for a number of years. However, with the onset of big data, as well as an insatiable thirst for analytics, the industry is taking a second look at this promising approach to speeding up data processing.

A profound shift is occurring in where data lives. Thanks to skyrocketing demand for real-time access to huge volumes of data - big data - technology architects are increasingly moving data out of slow, disk-bound legacy databases and into large, distributed stores of ultra-fast machine memory. The plummeting price of RAM, along with advanced solutions for managing and monitoring distributed in-memory data, means there are no longer good excuses to make customers, colleagues, and partners wait the seconds - or sometimes hours - it can take your applications to get data out of disk-bound databases. With in-memory, microseconds are the new seconds.
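The in-memory idea described above can be sketched with Python's built-in sqlite3 module, which supports a pure in-memory database via the special ":memory:" path. This is an illustrative toy, not any of the commercial in-memory products the article alludes to: the schema, table name, and data are invented for the example.

```python
import sqlite3

# Connect to a database that lives entirely in RAM: the same SQL as a
# file-backed database, but no query ever touches disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 120.0), ("west", 95.5), ("east", 40.0)],
)

# The aggregate below is served from memory -- no disk round trip.
total_east = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = ?", ("east",)
).fetchone()[0]
print(total_east)  # 160.0
conn.close()
```

Swapping ":memory:" for a filename gives the disk-bound behavior the article contrasts it with; the application code is otherwise identical, which is part of why in-memory stores are attractive to architects.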


Columns - Notes on NoSQL

Hadoop is the most significant concrete technology behind the so-called "Big Data" revolution. Hadoop combines an economical model for storing massive quantities of data - the Hadoop Distributed File System - with a flexible model for writing massively scalable programs - MapReduce. However, as powerful and flexible as MapReduce might be, it is hardly a productive programming model. Programming in MapReduce reminds one of programming in assembly language - the simplest operations require substantial code.
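The verbosity complaint above is easiest to see with the classic word-count example. The sketch below mimics the MapReduce programming model in plain Python (mapper emits key-value pairs, a shuffle groups them by key, a reducer aggregates each group); it is a toy model of the pattern, not Hadoop's actual Java API, and the function names are invented for illustration.

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    for word in line.split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce phase: sum the counts collected for one key.
    return (word, sum(counts))

def run_mapreduce(lines):
    # Shuffle phase: group mapper output by key before reducing.
    groups = defaultdict(list)
    for line in lines:
        for word, count in mapper(line):
            groups[word].append(count)
    return dict(reducer(w, c) for w, c in groups.items())

print(run_mapreduce(["big data big"]))  # {'big': 2, 'data': 1}
```

Even this stripped-down version needs three functions and an explicit grouping step for a task that a higher-level tool expresses in one line (e.g. `collections.Counter("big data big".split())`), which is the productivity gap the column describes.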


Columns - Database Elaborations

Establishing a data warehousing or business intelligence environment initiates a process that works its way through the operational applications and data sources across an enterprise. This process focuses not only on identifying the important data elements the business lives and breathes, but also on explaining those elements rationally to business intelligence users.


Columns - DBA Corner

Enterprise developers these days are usually heads down, in the trenches working on in-depth applications using Java or .NET with data stored in SQL Server or Oracle or DB2 databases. But there are other options. One of them is FileMaker, an elegant database system and development platform that can be used to quickly build visually appealing and robust applications that run on Macs, Windows PCs, smartphones, and iPads.


Columns - SQL Server Drill Down

Today, I would like to give you a primer on how to read the benchmark reports that are published by the major database and hardware vendors. You never know when a vendor will publish a new benchmark. There's no set schedule for them to publish their test findings. Of course, you can always look for new advertisements from many of the vendors. But that's very imprecise.


MV Community

Entrinsik has launched a traveling road show to provide Informer training. The idea for the Informer Power Users Road Show grew out of the success of the ICON Informer user conference and the feedback Entrinsik was receiving about the Informer customer service team's on-site work with individual clients, Sharon Shelton, vice president of marketing at Entrinsik, tells DBTA. "The road shows are really an answer to the question: How do we provide this service for more of our client base - especially those who can't make it to ICON, or don't have the resources to afford implementation and training services on-site? We are trying to get the best information out to customers in a way that is cost-efficient for them and for us," Shelton says.

MultiValue technology veteran Mark Pick recently launched a new company, Pick Cloud, Inc. Here, Pick - whose father Dick Pick is widely credited as having been a founding father of MultiValue software - discusses the reasons he embarked on the new venture and his goals for the company. "A lot of people worry about latency issues in moving to the cloud, but of all the applications that exist out there, a traditional Pick application is very thin, very low bandwidth, so that wasn't going to be an issue," Pick tells DBTA. "And, when we go down the list of things that we can provide for our customers, number-one, it is fast and easy to deploy."

Revelation Software has announced Revelation Universal Driver Heavy (UDH) v.4.7. The UDH is client/server software designed to allow real-time mirroring of Revelation linear hash data. The UDH enables a business to switch immediately if needed from a primary server to a secondary, which is essential for companies with 24 x 7 operations, Robert Catalano, director of sales at Revelation, tells DBTA.
