Deploying the DICE Simulation Tool in the News Orchestrator DIA

Jan 16, 2018

Scalability, bottleneck detection, and simulation-based predictive analysis are among the core requirements for the News Orchestrator DIA. The DICE Simulation tool promises a performance assessment of a Storm-based DIA that allows the behaviour of the system to be predicted before deployment to a production cloud environment. The News Orchestrator engineers often spend considerable time and effort configuring and adapting the topology to the target runtime execution context. A tool that can perform such a demanding task efficiently would clearly increase developer productivity and also support their testing needs.

Continue reading »

DevOps: A Quality Assessment Experience

Dec 13, 2017

In a previous article on this blog, we discussed the importance of assessing quality during the development of data-intensive applications (DIAs). In particular, we explored the performance and reliability properties of DIAs and presented a Simulation tool (SimTool) that helps with this task. This article extends that contribution, specifically by addressing quality in the DevOps context. The core idea of DevOps is to foster close cooperation between the Dev and Ops teams. The reader may also be interested in taking a look at what the DICE project proposes in this regard.

Continue reading »

Detecting Anomalies during App Development

Oct 31, 2017

During the development of a Data-Intensive Application (DIA) using Big Data frameworks (such as Storm, Spark, etc.), developers have to contend not only with developing their application but also with the underlying platforms. In the initial stages of development, bugs and performance issues are almost unavoidable and are usually hard to diagnose from monitoring data alone. The anomaly detection platform is geared towards automatically checking for performance-related contextual anomalies.
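To give a flavour of the idea, the sketch below applies a simple trailing-window z-score test to a stream of latency samples. This is purely illustrative and far simpler than the methods the DICE anomaly detection platform actually uses; all names here are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window of samples."""
    anomalies = []
    for i in range(window, len(samples)):
        ctx = samples[i - window:i]          # the "context" for this point
        mu, sigma = mean(ctx), stdev(ctx)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A latency spike at index 7 stands out sharply against its context.
latencies = [10, 11, 9, 10, 12, 11, 10, 95, 10, 11]
print(flag_anomalies(latencies))  # [7]
```

The point of the "contextual" qualifier is visible here: 95 is anomalous only relative to its surrounding window, not as an absolute value.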

Continue reading »

5 Reasons to Use Fault Injection in DevOps

Oct 31, 2017

Bringing development and IT operations together can help address many application deployment challenges. Addressing quality amid these challenges requires a toolset to manage and measure performance and to improve reliability. There is a need not only for resilient platforms but also for robustness of the data-intensive applications that run on them.

A Fault Injection Tool (FIT) is part of that solution: one way to provoke scenarios that expose the impact of otherwise hidden issues. The FIT enables cloud platform faults, such as resource stress and service or VM outages, to be caused in a controlled way, the purpose being to observe the subsequent effect on deployed applications.
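As a toy illustration of the principle only (not the FIT's actual mechanism, which targets real cloud resources rather than in-process calls), a service call can be wrapped so that failures are injected at a controlled, reproducible rate:

```python
import random

def with_faults(call, failure_rate=0.3, seed=None):
    """Wrap `call` so a fixed fraction of invocations raise an injected fault."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    def faulty(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")  # simulated outage
        return call(*args, **kwargs)
    return faulty

def fetch(x):          # stands in for a real service call
    return x * 2

flaky_fetch = with_faults(fetch, failure_rate=0.5, seed=1)
results, errors = [], 0
for i in range(10):
    try:
        results.append(flaky_fetch(i))
    except ConnectionError:
        errors += 1
print(errors, "injected failures; successes:", results)
```

Observing how an application copes with such deliberately provoked failures, repeatably, is exactly the kind of evidence fault injection is meant to produce at the platform level.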

The FIT is designed for use in a DevOps workflow, for tighter correlation between application design and cloud operation, although it is not limited to this usage. It helps improve the resiliency of data-intensive applications by bringing together fault tolerance, stress testing and benchmarking in a single tool. Here are 5 compelling reasons why a FIT is useful for developers of data-intensive applications.

Continue reading »

JMT Petri Net Extension for Performance Analysis of Big Data Applications

Sep 10, 2017

JMT (Java Modelling Tools) is an integrated environment for performance evaluation, capacity planning and workload characterization of computer and communication systems [1]. A number of cutting-edge algorithms are available for exact, approximate and asymptotic analysis of queueing networks (QNs), with either product-form or non-product-form solutions. Users can define and solve models through a well-designed graphical interface or, optionally, an alphanumeric wizard. Released under GPLv2, JMT serves a large community of thousands of students, researchers and practitioners, with more than 5,000 downloads per year.
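JMT itself is a graphical environment, but as a minimal taste of the kind of closed-form queueing analysis it automates, the classic M/M/1 formulas give utilization, mean population and mean response time directly from the arrival and service rates. This is a hand-rolled sketch of the textbook formulas, not JMT code:

```python
def mm1_metrics(lam, mu):
    """Exact M/M/1 results: utilization, mean jobs in system, mean response time."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu            # utilization
    n = rho / (1 - rho)       # mean number of jobs in the system
    r = 1 / (mu - lam)        # mean response time
    return rho, n, r

rho, n, r = mm1_metrics(lam=4.0, mu=5.0)
print(rho, round(n, 6), r)  # 0.8 4.0 1.0
```

For the non-product-form networks that arise with Big Data frameworks, no such closed form exists, which is where JMT's approximate and simulation-based solvers come in.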

Continue reading »

“Gazing” the Clouds: Cloud Applications Monitoring, and what’s going on in industry…

Sep 04, 2017

The advent of cloud computing triggered a huge change in software release cycles, as an increasing number of companies embrace cloud technologies as the 21st century's technological utility. Where companies once made large, upfront investments in physical servers, that strategy is increasingly being replaced by on-demand, pay-per-use cloud access. At the same time, complex manual deployment procedures are increasingly being automated in the context of DevOps and related technologies. What are the organizational and technical consequences of these phenomena?

Continue reading »

Release 0.3.4 of DICE Deployment Service

Jun 19, 2017

We are happy to announce release 0.3.4 of our DICE Deployment Service and version 0.7.0 of the DICE TOSCA technology library. With these components, we aim to remove one big hurdle on the path to the world of Big Data: setting the components up and wiring them so that all the parts play along nicely. We also want to enable users to easily run their applications in a number of private and public clouds without worrying about being locked into a particular one. This release introduces a unified approach to deploying blueprints to OpenStack, Amazon EC2 or Flexiant Cloud Orchestrator without needing to change anything in the blueprint.

Continue reading »

Rich Client Platform for the DIA-integrated Development

Mar 07, 2017

DICE focuses on quality assurance for data-intensive applications (DIAs) developed through the Model-Driven Engineering (MDE) paradigm. The project aims at delivering methods and tools that help satisfy quality requirements in data-intensive applications through iterative enhancement of their architecture design. One component of the tool chain developed within the project is the DICE IDE, an Integrated Development Environment (IDE) that accelerates the development of data-intensive applications.

The Eclipse-based DICE IDE integrates most of the tools of the DICE framework and forms the basis of the DICE methodology. As highlighted in the deliverable D1.1 State of the Art Analysis, no MDE IDE yet exists on the software market through which a designer can create models to describe and analyse data-intensive or Big Data applications and their underpinning technology stack. This is the motivation for defining the DICE IDE.

The DICE IDE is based on Eclipse, the de-facto standard for the creation of software engineering models based on the MDE approach. DICE customizes the Eclipse IDE with suitable plug-ins that integrate the execution of the different DICE tools, in order to minimize learning curves and simplify adoption. In this blog post we explain how the DICE tools introduced to the reader earlier have been integrated into the IDE. So, how is the DICE IDE built?

Continue reading »

Apache Cassandra: From Design to Deployment

Feb 02, 2017

In spite of its young age, the Big Data ecosystem already contains a plethora of complex and diverse open source frameworks. They commonly come in two kinds: data platform frameworks, which deal with the needed storage scalability, and processing frameworks, which aim to improve query performance [1]. A Big Data application is generally produced by combining them smoothly. Each framework operates with its own computational model: for example, a data platform framework may manage distributed files, tuples, or graphs, and a processing framework may handle batch or real-time jobs. Building a reliable and robust Data-Intensive Application (DIA) consists in finding a suitable combination that meets the requirements. Moreover, without careful design by developers on the one hand, and optimal configuration of the frameworks by operators on the other, the quality of the DIA cannot be guaranteed.

In this blog post we would like to mention three simple principles we have learned while we were building our Big Data application:

  1. Using models to synchronize the work of developers and operators;
  2. Designing databases so that we do not need to update or delete data; and
  3. Letting operators resolve low-level production-specific issues.
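Principle 2 can be illustrated with a small, hypothetical sketch: instead of updating rows in place, every change is appended as a new immutable event, and the current state is derived by taking the latest event per key. This is the pattern an append-only Cassandra table encourages; the names below are invented for illustration.

```python
events = []  # append-only log of (key, value, sequence number)

def record(key, value):
    """Append a new immutable fact; nothing is ever updated or deleted."""
    events.append((key, value, len(events)))

def current(key):
    """Derive the current state: the latest recorded value for the key wins."""
    latest = None
    for k, v, _seq in events:
        if k == key:
            latest = v
    return latest

record("article:42:status", "draft")
record("article:42:status", "published")
print(current("article:42:status"))  # published
```

Because the log is never mutated, the full history remains available, and writers never contend over a shared row, which is precisely what makes the design friendly to a distributed store.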

Continue reading »

Formal Verification of Data-Intensive Applications with Temporal Logic

Dec 05, 2016

Besides functional aspects, designers of Data-Intensive Applications have to consider various quality aspects specific to applications that process huge volumes of data with high throughput and run in clusters of (many) physical machines. A broad set of non-functional aspects in the areas of performance and safety should be considered at an early stage of the design process to guarantee high-quality software development.

Evaluating the correctness of such applications is definitely not trivial, especially when functional and non-functional aspects are both involved. In the case of Data-Intensive Applications, the inherent distributed architecture, the software stratification, and the computational paradigm implementing the logic of the applications pose new questions about the criteria that should be considered to evaluate their correctness.
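To give a flavour of the formalism, here is a generic linear temporal logic (LTL) property, illustrative only and not necessarily the exact logic the DICE verification tool employs. A typical liveness requirement, "every emitted message is eventually processed", can be written as:

```latex
% G = "globally" (always), F = "finally" (eventually)
\mathbf{G}\,\bigl(\mathit{emitted} \rightarrow \mathbf{F}\,\mathit{processed}\bigr)
```

A model checker can then verify whether every execution of a model of the application satisfies this formula, or produce a counterexample trace when it does not.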

Continue reading »