
General Concepts in Flows

Flows | July 30, 2025

Because Flows will be a new concept to many of you, it is important to understand the general concepts that Flows are built upon. These concepts are:

  • Many tasks are repetitive.
  • These tasks should be done consistently.
  • These tasks can often be split into small pieces.

What Are Flows?

Flows are a batch processing system that combines Flow tools to perform operations consistently across a dataset; they are especially useful for repetitive tasks like gridding data. For a more complete description, read our What are Flows? help article. Each Flow consists of a series of steps that can be split into three parts:

  1. Data input
  2. Data processing
  3. Data output

The data processing step may contain several Flow tools that each perform a specific task (e.g., a tool to baseline the SP curve, another to remove bad values, and a third to rename the curve). The data processing step may also convert one data type to another. For example, the CpiLogCalc tool may be used to summarize log data and convert it to points, which PointsToGrid may then turn into a grid that can be written out.
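To make the pipeline idea concrete, here is a minimal sketch in Python of a flow as an ordered chain of tool steps. The Flow class and the log_input, cpi_log_calc, points_to_grid, and grid_output functions are hypothetical stand-ins for illustration only; they are not the Danomics API.

```python
# Conceptual sketch of a flow as an input -> processing -> output pipeline.
# The class and function names below are hypothetical, not the Danomics API.
from typing import Any, Callable, List


class Flow:
    """Chains a series of tool steps and runs them in order."""

    def __init__(self) -> None:
        self.steps: List[Callable[[Any], Any]] = []

    def add(self, tool: Callable[[Any], Any]) -> "Flow":
        # Each tool is a small building block: it takes the output of the
        # previous step and returns data for the next one.
        self.steps.append(tool)
        return self

    def run(self, data: Any = None) -> Any:
        for tool in self.steps:
            data = tool(data)
        return data


# Hypothetical tool functions mirroring the example in the text:
# read logs, summarize them to points, grid the points, write the grid.
def log_input(_):            return {"type": "logs", "curves": ["SP", "GR"]}
def cpi_log_calc(logs):      return {"type": "points", "source": logs}
def points_to_grid(points):  return {"type": "grid", "source": points}
def grid_output(grid):       print("writing", grid["type"]); return grid


flow = Flow().add(log_input).add(cpi_log_calc).add(points_to_grid).add(grid_output)
flow.run()
```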

Flows Don’t Replace Petrophysics, DCA, or Mapping

Flows are NOT a replacement for your interpretations. They are a complement to the petrophysics, decline curve analysis, and mapping capabilities, and can be used both beforehand and afterwards. For example, Flows may be used to construct a set of grids based on one’s petrophysical interpretation. Or, a Flow may be used to perform some data management or pre-processing before starting an interpretation.

Flow Tools

Flows are constructed using a series of pre-defined tools. It is often useful to think of these as building blocks that stack on one another, allowing you to combine several simple tasks into an overall complex operation. There are essentially three types of tools: input tools, manipulation or calculation tools, and output tools.

Input Tools

Danomics handles many data types. Here are some example data types and the corresponding data input tool you would use to access that data in a Flow:

Description        | Database Type | Flow Tools for Input
Well log data      | ldb           | LogInput
Formation tops     | tops          | PointsInput
Well header data   | wdb           | PointsInput
Core data          | Points        | PointsInput
Deviation surveys  | Points        | PointsInput
Seismic data       | SegY          | SegYInput
Gridded data       | Grid          | GridInput

Processing Tools

Once you have selected the data to work with, you will probably want to manipulate it in some way. For example, if you are working with logs you may want to perform a calculation or remove illogical values. You may then wish to summarize the data on a per-zone basis and then grid it (see the example in the What are Flows? article).
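As a generic illustration of the kind of work a processing step does (not the Danomics tools themselves), the sketch below removes illogical log values and summarizes a curve on a per-zone basis; the column names, null flag, and cutoff are assumptions made for the example.

```python
# Generic illustration of a processing step: filter out illogical values,
# then summarize a curve per zone. Column names and cutoffs are assumptions.
import pandas as pd

logs = pd.DataFrame({
    "zone": ["A", "A", "B", "B", "B"],
    "gr":   [45.0, -999.25, 80.0, 3000.0, 75.0],  # -999.25 is a common null flag
})

# Remove null flags and physically implausible values.
clean = logs[(logs["gr"] > 0) & (logs["gr"] < 500)]

# Summarize on a per-zone basis (e.g., mean GR per zone), ready for gridding.
per_zone = clean.groupby("zone")["gr"].mean().reset_index()
print(per_zone)
```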

There are dozens of tools for processing data. A complete listing can be found on this page along with the license type required to run them: Features by License Type.

Output Tools

After you have performed an operation on your data, you will write that data to a new database or file. Output tools include:

Data Type                   | Flow Tool
Points                      | PointsOutput
Logs                        | LogOutput
Tops                        | TopsOutput
Seismic                     | SegYOutput
Bricked Seismic / 3D Models | BrickOutput
Grids                       | GridOutput

Tips and Tricks

The primary concept in Flows is that data is input, processed, and output. Each step of the Flow utilizes a tool to perform a specific operation. These tools can be stacked on top of one another like building blocks to allow for increasingly complex operations to be performed. Flows both reduce the amount of manual interaction and button clicks required to generate work products and ensure consistency by creating a series of transparent steps along the way. Things to keep in mind:

  • You almost always need to start a Flow by bringing data into it with an input tool.
  • You almost always want to end a Flow by writing the data out using an output tool.
  • You can perform a wide range of operations between the input and output. For example, you can do mathematical calculations, filter data, rename data, and convert data to other types.
  • Always make sure that the output has a name different from the input. For example, you would not want to use a LogInput to bring in MyLogs.ldb and a LogOutput to write MyLogs.ldb in the same Flow, as this would cause the output tool to try to write over the input, causing data corruption (see the sketch after this list).
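Here is a minimal sketch of the naming guard described in the last point, in plain Python; the file names and the check itself are illustrative only and are not part of any Danomics tool.

```python
# Illustrative guard: refuse to run a flow whose output name matches its input,
# so the output step cannot overwrite the data it is reading from.
input_db = "MyLogs.ldb"
output_db = "MyLogs_edited.ldb"  # hypothetical name; must differ from the input

if output_db == input_db:
    raise ValueError("Output database must not have the same name as the input database.")
print(f"Safe to run: reading {input_db}, writing {output_db}")
```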
