
Example Flow: Log Clean-up

Flows | January 27, 2026

This article provides an overview of how to use a Flow to perform log clean-up operations. The following Flow tools are used, in this order (note: this is similar to, but not identical to, the Example Flow provided in the software):

  • LogInput >> Brings the log data into the Flow
  • FixLogDepthProblems >> Fixes common inconsistencies related to depth
  • LogResample >> Resamples the logs to a common depth step
  • LogUnitConversion >> Converts all units to a common reference
  • InferSGRDLogType >> Handles curve name collisions related to guard resistivity and gamma ray curves
  • InferPorosityLogType >> Evaluates porosity curve descriptions to determine reference matrix (sandstone, limestone, dolomite)
  • InferResistivityLogType >> Evaluates resistivity curve descriptions to assign to induction or laterolog type tools
  • SPCurveBaselining >> Removes SP drift from SP curve
  • NullRepeatedLogSamples >> Removes flat spots from selected log curves
  • RemoveAllNullLogs >> Removes any curves where all values are null
  • DeleteComputedCurves >> Handles curve name collisions
  • LogSplicing >> Splices and/or depth shifts curves
  • LogOutput >> Writes the curves to a new log database.

The full flow is shown below:

Aside from the LogInput and LogOutput tools, all of the other tools are optional. In addition, other tools can be added as needed.
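For readers who think in code, a Flow is conceptually a pipeline of transformations applied to a set of curves. The sketch below is a rough stand-in for LogInput, using the lasio Python library to read a LAS file into a plain dict of numpy arrays. The file name is a placeholder, and this is not the Danomics API - just a mental model for the per-tool sketches that follow.

import lasio
import numpy as np

# Conceptual stand-in for LogInput: read one LAS file into a dict of
# curve name -> numpy array. "example_well.las" is a placeholder.
las = lasio.read("example_well.las")
curves = {curve.mnemonic: np.asarray(curve.data) for curve in las.curves}
depth = curves.pop("DEPT", None)  # depth mnemonic may vary by vendor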

FixLogDepthProblems is always a good starting point, as it handles minor problems that can cause unexpected consequences. Additionally, LogResample is highly advised, as it puts everything on a consistent depth step. Although the Danomics software can handle LAS files with different steps, many other software packages cannot, so resampling eliminates that issue.
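As a mental model for what resampling does, the sketch below interpolates one irregularly sampled curve onto a uniform grid with numpy. The 0.5 ft step and the sample values are invented, and a real resampler also has to handle nulls and many curves at once.

import numpy as np

# Hypothetical irregularly sampled gamma ray curve
depth = np.array([1000.0, 1000.4, 1001.1, 1001.5, 1002.2])
gr = np.array([55.0, 60.0, 72.0, 68.0, 50.0])

# Resample onto a uniform 0.5 ft depth step via linear interpolation
step = 0.5
new_depth = np.arange(depth.min(), depth.max() + step, step)
gr_resampled = np.interp(new_depth, depth, gr)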

LogUnitConversion can both convert units and rename them to a common mnemonic. For porosity curves it can also run a statistical check to evaluate whether values are likely in percent or decimal format. This is enabled by setting the Infer porosity units option.
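The percent-versus-decimal check is a simple statistical heuristic: decimal porosity lives between 0 and 1, while percent porosity is mostly well above 1. A minimal sketch of that idea in plain Python (not the Danomics implementation):

import numpy as np

def infer_porosity_units(phi):
    """Guess whether a porosity curve is percent or decimal.

    Heuristic sketch only: if the median of the valid samples exceeds
    1.0, the curve is almost certainly in percent, so divide by 100.
    """
    valid = phi[~np.isnan(phi)]
    if valid.size and np.median(valid) > 1.0:
        return phi / 100.0
    return phi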

For each of the infer tools (InferSGRDLogType, InferPorosityLogType, and InferResistivityLogType), it is advisable to set the tool to skip on unknown types unless you are absolutely certain of your choice.

For SPCurveBaselining, it is advised to run it only after resampling has been performed, as very small depth step increments will cause excessive run times.
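To see why the depth step matters here, consider one common way drift removal can be done: estimate a slowly varying baseline with a long moving window and subtract it. The sketch below is an illustration of that idea, not the Danomics algorithm. Its cost grows with both the sample count and the window size in samples, so halving the depth step roughly quadruples the work.

import numpy as np

def remove_sp_drift(sp, window=501):
    """Illustrative drift removal: subtract a long moving-median baseline.

    Not the Danomics algorithm. `window` is in samples, which is why
    tiny depth steps blow up run time: more samples per foot means a
    larger window and more work at every sample.
    """
    half = window // 2
    baseline = np.array([
        np.nanmedian(sp[max(0, i - half):i + half + 1])
        for i in range(len(sp))
    ])
    return sp - baseline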

For NullRepeatedLogSamples, it is important to only include curves that you do not expect to be flat - don't add bit size curves, binary flags (e.g., net_pay), or similar, as those will likely be entirely nulled out.
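A sketch of the flat-spot logic makes that warning concrete: any run of identical consecutive values longer than some threshold gets nulled, so a curve that is constant by design is one long run and disappears entirely. The function and its threshold below are hypothetical:

import numpy as np

def null_repeated_samples(curve, max_run=10):
    """Null runs of identical consecutive values longer than max_run."""
    out = curve.astype(float).copy()
    run_start = 0
    for i in range(1, len(curve) + 1):
        # close the current run at a value change or at the end
        if i == len(curve) or curve[i] != curve[run_start]:
            if i - run_start > max_run:
                out[run_start:i] = np.nan
            run_start = i
    return out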

RemoveAllNullLogs is an optional but useful step: it removes curves with no values so they won't show up in subsequent filtering operations.
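In code terms the check is a one-liner; the sketch assumes the curves-as-dict model from earlier (hypothetical, not the Danomics API):

import numpy as np

def remove_all_null_logs(curves):
    """Drop any curve whose values are entirely null (NaN)."""
    return {name: values for name, values in curves.items()
            if not np.all(np.isnan(values))}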

DeleteComputedCurves is useful for clarity. If you import a curve named PhiT and Danomics also calculates a PhiT, which one are you seeing? Which one gets saved to the LAS file on export? This tool will either remove or rename those curves based on your selection.
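The two modes are easy to picture; this sketch uses hypothetical names and the dict model from earlier, not the Danomics API:

def resolve_collision(curves, name, computed_values, mode="rename"):
    """Resolve a collision between an imported and a computed curve.

    "remove" simply discards the computed duplicate; "rename" keeps it
    under an unambiguous key so an exported LAS shows which is which.
    """
    if mode == "rename":
        curves[name + "_computed"] = computed_values
    return curves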

LogSplicing is optional. It can normalize curves based on overlap and depth shift them. One hidden use: when you have a database with many identical wells, it will collapse them into a single LAS, which can make your project run much faster. In the screenshot below we are normalizing curves to one another, but we are not depth shifting.
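Normalizing on overlap roughly means scaling one run so its statistics match the other over the depth interval they share. The sketch below matches means over a boolean overlap mask; a real splicer does much more (shift-and-scale, outlier handling, the splice itself), and these names are hypothetical:

import numpy as np

def normalize_on_overlap(ref, other, overlap):
    """Scale `other` so its mean matches `ref` over the shared interval.

    `overlap` is a boolean mask marking the common depth range.
    """
    gain = np.nanmean(ref[overlap]) / np.nanmean(other[overlap])
    return other * gain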

Other tools can be added as needed. For example, it might be useful to add LogMath tools to null out nonsensical data (e.g., GR < 0).
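The GR < 0 example amounts to one line of array math; the values below are invented:

import numpy as np

gr = np.array([45.0, -999.25, 80.0, -2.0, 60.0])

# Null physically impossible negative gamma ray readings (this also
# catches an un-decoded -999.25 null flag).
gr_clean = np.where(gr < 0, np.nan, gr)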

Tips and Tricks

  • Think about the order of operations - make sure, for example, to convert units before doing anything that is unit-dependent.
  • Remember that you can always update the Flow later if you notice more issues - e.g., bad gamma ray curves that require a LogMath fix.
  • After running the Flow, make sure to use the database created by the LogOutput tool in your petrophysical interpretation.
