
‘D’ of Things: A look back into 2017 and forward to 2018

(Image by Elisa Riva)

As we inevitably approach the end of the year (a year marked by many important advances across the data management space) I just can't avoid thinking with expectation and excitement about what should be just around the corner for 2018.

If 2017 was anything but boring, 2018 looks like another promising year, no less fast-paced and competitive.

But mindful of what Niels Bohr once said:

Prediction is very difficult, especially if it's about the future.

I will avoid making big prediction statements and instead take a look at some relevant things that happened this ending year and at what will be interesting to follow closely next year.

Still, feel free to call me out next year on what I missed.

A look back at 2017

2017 has been a year full of exciting events and news, and yet the following are, in my view, those that deserve the most attention due to their transformational effect on the industry.

So here, in no particular order, is a summary of relevant trends in 2017:

Security and Governance for Big Data

As existing and new big data projects evolve, along with the myriad of data security incidents that happened through the year, it is only natural that companies take further steps to make these projects interoperate more, and more efficiently, with the rest of the enterprise software stack while keeping them safe and secure. More than ever, companies need to reinforce data protection measures, and their need to govern access and usage increases as well.

Much discussion and work in previous years has enabled companies to improve their data governance practices, yet 2017 was especially important for companies on both the user and vendor sides.

This year, major efforts were made to crystallize these practices, consolidating big data governance with efficient security and data protection so users can get their hands on the data they need in the safest and most efficient way possible. This has enabled, or forced, vendors to extend the capabilities of their existing data governance solutions, especially in relation to big data and Internet of Things (IoT) initiatives.

Two identifiable trends seemed to emerge as many organizations aim to consolidate big data projects within their existing data platforms in a secure and efficient way:

  • One comes with the growth of a new generation of solutions that incorporate capabilities for governing and securing big data sources. Offerings such as Collibra’s Data Governance Center and Alation Data Catalog, along with those from large data management companies, including IBM’s InfoSphere Information Governance Catalog and SAS Data Governance, are enabling a new generation of data governance solutions with specific capabilities for dealing with big data sources.
  • The second trend is the increased interest in integrated solutions able to view data management and governance initiatives through a single lens, via a relatively new data management construct called the “data lake”. This includes data lake management platforms, like those offered by Zaloni and Podium Data, and solutions from software powerhouses like Microsoft, with its data lake solution, or Informatica’s Intelligent Data Lake.

Without a doubt, 2017 signaled a growing consciousness within many organizations of the importance of adopting comprehensive enterprise approaches to their data management initiatives, enabling smoother consolidation and greater efficiency. It is a trend I certainly expect to evolve further through 2018.
Analytics consolidation: a data scientist dream come true?

In 2017, analytics was a software market on fire, especially with the increasing incorporation of machine learning and artificial intelligence and their continuous integration with existing business intelligence (BI) and enterprise performance management (EPM) solutions.

It seems an internal revolution is taking place within the analytics market, with many things happening at once, including the evolution and adoption of so-called “data science” within many organizations. Controversies aside, this has opened new avenues for the development of a new generation of analytics platforms and the emergence of new types of analytics specialists and information workers.

In 2017 we witnessed this evolution: on one side, a new generation of analytics platforms consolidated many analytics capabilities within a single platform, while others went further, incorporating the ability to automate many of the data management processes that need to happen beforehand, including data profiling, preparation and integration.

New companies designed as data science platforms now include the ability to consolidate many, if not all, of the functional features required to perform a full advanced analytics cycle, and this trend was particularly clear through 2017.

Companies like Alpine Data, Dataiku, and DataRobot are taking data science to the enterprise software mainstream, while others like Emcien and BigML are taking innovative, self-service approaches, applying advanced and automated algorithms to effectively solve practical cases.

Additionally, major BI and analytics players have been working to take their offerings to the next stage, or even to come up with brand new solutions. Examples include Tableau and Qlik: the former is now putting technology from previous acquisitions to work, including its new Hyper high-performance database engine and natural language startup ClearGraph, to expand its analytics capabilities, while the latter has new offerings including Qlik Sense and its relatively new Qlik Analytics Platform.

Moreover, and to add to this trend, during the second half of 2017 global software powerhouses announced or released brand new analytics platforms, including SAP with SAP Data Hub and SAP Vora, Teradata with its new Analytics Platform and IntelliSphere, and IBM with its Integrated Analytics System.

Another key element is the emergence of a group of next-generation BI solutions, both in the cloud and on-premises, all with a myriad of capabilities for analyzing new data sources and many key new features that make BI easier to use and to integrate with third-party applications. Some solutions worth a look include Dundas BI, AtScale, Yellowfin, Pyramid Analytics and Phocas, to name just a few.

Preparing for the next database revolution?

While less hyped at times, with most of the interest placed on the consumer-facing portion of the data management stack (analysts, data scientists, CxOs and others), less attention is paid to what is going on with key underlying data technologies, including the database market and many of its derivatives. And yet a lot is happening in this area, so here are some of the most notable events of 2017.


  • The commoditization of the In-Memory DB

Driven by a continuous increase in the number and complexity of transactions to be managed, and despite the hype big data and analytics enjoyed across the software industry, vendors and consumers seem to have gained renewed interest in new database management technologies for transactional systems, especially those aiming to maintain efficiency under extreme transaction processing.

This renewed interest in keeping pace with the phenomenon has not only remained but somewhat increased, especially in the areas where extreme transaction processing occurs the most, such as communications and finance, triggering buyers’ and software vendors’ appetite for producing and deploying faster and better technology.

During the last couple of years, and especially in 2017, the adoption of in-memory technologies applied to transactional database systems has gained significant interest, especially among large vendors that are renewing and updating their existing database offerings.

Examples include SAP with HANA, and more recently SAP ASE’s incorporation of in-memory processing for extreme transaction processing; Oracle with its in-memory options; SQL Server’s in-memory capabilities; and smaller yet powerful proponents including McObject, Altibase and VoltDB.
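As a minimal sketch of the underlying idea (not of any of the products above), SQLite’s in-memory mode illustrates what in-memory transaction processing means in practice: the entire database lives in RAM, so commits never wait on a storage device.

```python
import sqlite3

# An in-memory database lives entirely in RAM: no disk I/O on commit,
# which is the core idea behind in-memory transactional engines.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")

# High-frequency inserts commit without touching a storage device.
with conn:
    conn.executemany(
        "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
        [("ABC", 100), ("XYZ", 250), ("ABC", 50)],
    )

total = conn.execute("SELECT SUM(qty) FROM trades WHERE symbol = 'ABC'").fetchone()[0]
print(total)  # 150
```

The trade-off, of course, is durability: production in-memory engines add replication or logging to persistent storage, which is where most of the engineering effort in this market goes.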

  • Distributed databases

As businesses continue to globalize operations, so grows the need for databases that can scale out and carry massively scalable applications across the globe.

While not new, distributed databases were particularly in the spotlight this year with the releases made by major software powerhouses: Cloud Spanner by Google and Azure Cosmos DB by Microsoft.

These two announcements reminded us how important new database technologies will be for supporting next-generation software solutions in the years to come. It seems logical to expect that players in this field, including GridGain and Clustrix, as well as established players like Apache Cassandra-based company DataStax, will enter a next phase of competition for new opportunities in markets like mobile and IoT. I suspect there is far more to come in the coming years.

  • Database and containers

One thing worth following next year and in the years to come will be the move of databases into containers.

An interesting series of discussions, in favor (faster, automated deployments) and against (potential networking and security concerns), has been published analyzing the feasibility, benefits and challenges of databases offered in containers. And yet 2017 marked a significant movement toward offering database and data management solution images within containers; examples include Microsoft SQL Server and Cloudera on Docker.
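As a sketch of what this looks like in practice (the image name and tag below reflect Microsoft’s SQL Server 2017 Linux image as published at the time and may have changed since), a containerized database can be pulled and started with a couple of commands:

```shell
# Pull the SQL Server 2017 Linux image and start it as a container.
# ACCEPT_EULA and SA_PASSWORD are required environment variables.
docker pull microsoft/mssql-server-linux:2017-latest

docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" \
  -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  microsoft/mssql-server-linux:2017-latest

# The instance is now reachable on localhost:1433 like any other SQL Server.
docker ps --filter name=sql1
```

The appeal is obvious from the snippet alone: a full database engine deployed in seconds, with no host-level installation; the open questions raised in those discussions concern storage persistence and network isolation once such containers reach production.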

Now, will this be a successful trend?
Only time will tell, but despite documented challenges and failures, it is fair to assume the container-database combination will evolve well enough to become a viable option for some organizations.

2018: Yep, I’m looking forward to it

From what I witnessed this year, there is no sign we will be slowing down soon, and I suspect we will again be “drinking from the fire hose” in 2018, with many upcoming innovations.

And while I expect much more than this, I have prepared a small yet meaningful list of the things I personally will keep a close eye on this year, and you might want to as well:

The rise of full database self-service and automation

Just in October, Oracle unveiled its Oracle Autonomous Database Cloud, setting the stage for what could be an interesting battlefield in the database scene as the rest of the competitors take steps to follow suit, or even disrupt the market with technology innovations of their own, to build and release fully autonomous database offerings.

It will also be exciting to see Oracle’s autonomous database become a reality this year and evolve, while we wait for a new generation of fully automated databases.

BTW, personally, I don’t expect DBAs to disappear any time soon.

The Rise of the GPU?

As many organizations try to cope with an increasing need for faster and better ways to perform advanced analytics, new technologies continue to be developed to improve data management and analytics performance. The graphics processing unit, or GPU, is one of those technologies, and one with huge potential.

Originally used for gaming, this processing unit is now increasingly being applied to analytics workloads.

It will be interesting to watch how the GPU evolves and becomes embedded within the software mainstream in 2018 and beyond.

Security & Privacy

Of course, and not surprisingly, security and privacy will be in the headlines for a long time.

By the way, how are you doing with your GDPR compliance project?

Surely, following the progress companies make toward GDPR compliance will be a topic to watch as the deadline approaches this year, and of course the aftermath of its implementation and enforcement. It will be interesting to see the effects and impact on companies’ security, analytics and data governance practices during and after its implementation.

Security Analytics

Finally, on this topic, another aspect worth following in 2018 is the rise of security analytics platforms: the potential and evolution of these tools, as well as the impact these solutions have on organizations’ general security and privacy strategies.

So much more to come...

Of course, there is a lot more worth covering in 2018, but while I’m writing this my head keeps bringing up topics and I fear I’ll never stop; besides, I need to do some actual work now.

But before I go, I want to thank you for being a reader of this blog during 2017. Exciting things are also coming in 2018 for the ‘D’ of Things, so stay tuned, and please feel free to leave me a comment in the space below.

Finally, I wish you all a successful 2018, full of goals accomplished and health for you and your loved ones.

