What Are Flows?
Flows are Danomics' batch processing system – the way Danomics handles many of the repetitive tasks that are part of your day-to-day workflows. Whatever you are trying to accomplish – there's a Flow for that!
Flows will be a new concept to many users, but the general idea is that a Flow comprises individual tools that run in series to complete a task. Let's take the example of making a set of grids for porosity and water saturation on a zone-by-zone basis. You would need to do the following:
| Step | Description | Flow Tool |
|---|---|---|
| 1 | Get the log data | LogInput |
| 2 | Calculate PhiT and Sw | CpiLogCalc |
| 3 | Average them by Zone | CpiLogCalc (as above) |
| 4 | Grid the results | PointsToGrid |
| 5 | Write out the grid | GridOutput |
The Flow for accomplishing this is shown below.
This Flow does the following: it brings the well log data into the Flow using the LogInput tool. The CpiLogCalc tool then pulls in the petrophysical interpretation and zone information to calculate the average PhiT and average Sw, outputting points on a per-zone, per-curve basis. PointsToGrid converts those points to a grid, and GridOutput writes a multigrid file containing the grids for both the average porosity and the average water saturation for every zone (you can write hundreds of grids in one Flow such as this!).
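To make the tools-in-series idea concrete, here is a minimal Python sketch of this Flow. Danomics Flows are assembled in the interface, so the class, tool parameters, and file names below are purely illustrative assumptions, not the actual Danomics API.

```python
# Hypothetical sketch only: Danomics Flows are built in the GUI, and these
# names and parameters are illustrative, not the real Danomics interface.
from dataclasses import dataclass, field

@dataclass
class FlowTool:
    name: str
    params: dict = field(default_factory=dict)

# The tools run in series; each tool's output feeds the next one.
flow = [
    FlowTool("LogInput",     {"log_database": "my_logs"}),        # 1. get the log data
    FlowTool("CpiLogCalc",   {"cpi": "my_interpretation",         # 2-3. calculate PhiT/Sw
                              "outputs": ["PhiT", "Sw"],          #      and average by zone
                              "aggregate": "average_by_zone"}),
    FlowTool("PointsToGrid", {"cell_size": 250}),                 # 4. grid the zone averages
    FlowTool("GridOutput",   {"file": "phit_sw_by_zone.grid"}),   # 5. write the multigrid file
]

for tool in flow:
    print(f"Running {tool.name}: {tool.params}")
```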
This Flow is also completely reusable. Want to update your interpretation and remake all the grids? No problem. Just update the interpretation in your CPI and then re-run this Flow. Want to use it on a different log database or petrophysical interpretation? No problem – swap out the relevant info in each Flow tool (or make a copy of the Flow first and swap it out there, so you still have your original Flow to reference).
To better learn this process, two examples are provided below.
Gridding Petrophysical Results
Building on the example above, let's say we wanted to automatically detect and remove outliers, grid the results, apply smoothing, and write out the smoothed and unsmoothed versions for comparison. To do this we'd want to build a Flow that is structured as follows:
| Step | Description | Flow Tool |
|---|---|---|
| 1 | Get the log data | LogInput |
| 2 | Calculate the properties | CpiLogCalc |
| 3 | Average and/or sum by zone | CpiLogCalc |
| 4 | Remove outliers | PointsSpatialFilter |
| 5 | Grid the results | PointsToGrid |
| 6 | Write unsmoothed grid | GridOutput |
| 7 | Smooth the grids | GridSmooth |
| 8 | Write smoothed grid | GridOutput |
This Flow will look like this:
Once this Flow is run, it will write out two multigrid files – one before smoothing and one after – each containing the average porosity and water saturation for every zone.
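The key structural difference from the first example is the branch: the same grids are written once as-is and once after smoothing. Here is a minimal sketch of that branch, assuming plain Python functions standing in for the GridSmooth and GridOutput tools (all names here are hypothetical):

```python
# Hypothetical stand-ins for the GridSmooth and GridOutput tools.
def grid_smooth(grids):
    """Placeholder smoother: the real GridSmooth tool exposes filter
    settings; here we just tag each grid as smoothed."""
    return [f"smoothed({g})" for g in grids]

def grid_output(grids, filename):
    print(f"Writing {len(grids)} grids to {filename}")

# Pretend the upstream tools (LogInput -> CpiLogCalc -> PointsSpatialFilter
# -> PointsToGrid) produced one grid per zone/property pair:
grids = [f"PhiT_zone{z}" for z in (1, 2, 3)] + [f"Sw_zone{z}" for z in (1, 2, 3)]

grid_output(grids, "unsmoothed.grid")             # step 6: write before smoothing
grid_output(grid_smooth(grids), "smoothed.grid")  # steps 7-8: smooth, then write
```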
What if you wanted to update it to include properties such as net pay and net reservoir thickness? No problem – just add those properties and re-run it. All that will change is the equations I select in the config. And if I want to add OOIP, hydrocarbon pore volume, and average properties for all of my input curves? No problem, just add those as well, as shown here:
Now this Flow is gridding 11 properties for the 10 zones I have defined in my petrophysical interpretation (CPI). This means each resulting .grid file will contain 110 grids – and all of this was done by running one Flow.
Log Cleanup Flow
Another common task is cleaning log data. Although this can be done in the petrophysical interpretation, it is often beneficial to do some basic cleanup and pre-processing ahead of time. An example of such a Flow is shown here:
In this Flow we undertake the following steps:
| Step | Description | Flow Tool |
|---|---|---|
| 1 | Get the log data | LogInput |
| 2 | Fix depth curve problems | FixLogDepthProblems |
| 3 | Convert units to standards | LogUnitsConversion |
| 4 | Handle GRD name collisions | InferSGRDLogType |
| 5 | Handle ambiguous lithology references | InferPorosityLogType |
| 6 | Handle ambiguous resistivity types | InferResistivityLogType |
| 7 | Baseline the SP curve | SPCurveBaselining |
| 8 | Remove flat spots in log data | NullRepeatedLogSamples |
| 9 | Remove entirely null curves | RemoveAllNullLogs |
| 10 | Write out a new log database | LogOutput |
This Flow performs 8 processing tasks and can be used on any log database. Like our other Flows, it can also be easily modified to include more processing steps. For example, let's say I wanted to resample all my logs onto a 0.5' depth step. I could add a LogResample tool. Let's also say I noticed that there were gamma ray curves with negative values. I could use a LogMath tool to null out those values. The new Flow would look like this:
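For intuition, here is what those two added steps do to a curve, sketched with NumPy standing in for the LogResample and LogMath tools. The 0.5' step and the negative gamma ray values come from the example above; the arrays and variable names are made up for illustration.

```python
import numpy as np

depth = np.array([100.0, 100.3, 100.9, 101.4, 102.1])  # irregular depth steps (ft)
gr    = np.array([ 45.0,  -8.0,  60.0,  55.0,  -2.0])  # gamma ray with bad negatives

# "LogMath"-style step: null out the physically meaningless negative GR values
gr = np.where(gr < 0, np.nan, gr)

# "LogResample"-style step: interpolate the surviving samples onto a regular
# 0.5 ft depth grid (a real tool would likely keep nulls as nulls rather
# than interpolate across them, as done here for simplicity)
new_depth = np.arange(depth.min(), depth.max(), 0.5)
valid = ~np.isnan(gr)
new_gr = np.interp(new_depth, depth[valid], gr[valid])

for d, g in zip(new_depth, new_gr):
    print(f"{d:6.1f} ft  GR = {g:5.1f}")
```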
You are probably now starting to understand how Flows can be assembled, modified, and re-used to make your workflows more efficient. For more examples, please look at the links on the Flows Help Articles page.
Flows for (Almost) Anything
If you analyze the parts of your workflow that are common across all of your projects, you will notice patterns. Flows will help you streamline those parts of your workflow. Here are some of the areas where we see Flows being especially useful:
- Gridding data for maps (e.g., structure, isopach, or reservoir property maps)
- Cleaning well log data
- Renaming well logs en masse for export and archiving
- Building machine learning powered Flows for predicting missing logs
- Identifying landing zones across 1000s of horizontal wells
- Filtering and finding data that meet certain criteria (e.g., all the wells that have a full triple combo across the Wolfcamp C formation)
- Finding wells that have a certain formation top (e.g., all wells that have a Wolfcamp C and Wolfcamp D top pick)
- Extracting values from grids, 3D property models, or seismic data
- Building custom Flows using Python (see the sketch below)
The list above is just meant to give you a few ideas of how you can use Flows. With a bit of patience and imagination you can do almost anything with Flows.
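As a taste of the custom Python item above, here is a hypothetical shape for a custom step that combines it with the formation-top example from the same list. The function signature and well records are illustrative assumptions, not the actual Danomics Python interface.

```python
# Hypothetical custom Flow step: the signature and data shapes are
# illustrative only -- consult the Danomics docs for the real interface.
def custom_flow_step(wells):
    """Keep only wells that have both a Wolfcamp C and a Wolfcamp D top,
    echoing the formation-top example in the list above."""
    required = {"Wolfcamp C", "Wolfcamp D"}
    return [w for w in wells if required <= set(w["tops"])]

# Toy records standing in for whatever an upstream tool would pass along:
wells = [
    {"name": "Well-1", "tops": ["Wolfcamp C", "Wolfcamp D"]},
    {"name": "Well-2", "tops": ["Wolfcamp C"]},
]
print([w["name"] for w in custom_flow_step(wells)])  # -> ['Well-1']
```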