Release Notes

Open Inventor® 9.9 (July 2018) 


The following document contains the release notes for Open Inventor 9.9 (July 2018). Bugs fixed by this release can be reviewed on this page, and all important compatibility changes are listed in the Compatibility Notes.

See below for the complete list of enhancements and new features included in Open Inventor 9.9.


With Open Inventor 9.9 we continue the reorganization of our packages started with Open Inventor 9.8. In this new release, only the C++ package has been updated, with the following changes:

  • Demo binaries are now under <OIVHOME>/examples/bin. Under this folder you'll find an arch-xxx-Release folder, where xxx depends on the platform you are using, in which all modules are stored. Note that an arch-xxx-Debug folder will be created if you rebuild the demos yourself.
  • Data files have also been moved and can now be found under <OIVHOME>/examples/data. In this folder you will find one folder per Open Inventor module, each storing the data associated with its demos.

We are still working to make all three packages (C++, Java and .Net) as consistent as possible, meaning some more changes may be coming in future releases of Open Inventor.



Debug Native Libraries

Open Inventor 9.9 Java now includes all native debug libraries. These libraries are not used at run time by your application and must not be distributed. However, they are useful for debugging your application and for providing customer support with traceback information related to the native libraries of Open Inventor.


OSGi support

We have improved the integration of Open Inventor Java in OSGi applications, on the Windows platform only. When the first Open Inventor object is instantiated, all necessary native DLLs are now loaded automatically. Thus we suggest removing from your code any System.loadLibrary calls related to Open Inventor DLLs.

On other platforms, however, explicit loading of dependencies is still required in the application.

See Support of OSGi for details.

Open Inventor

Polygon simplification enhancement

We have reimplemented the polygon simplification algorithm in Open Inventor 9.9. Based on "Surface Simplification Using Quadric Error Metrics" by Michael Garland and Paul S. Heckbert (Carnegie Mellon University), the new implementation offers better performance and is more reliable than before.
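For readers curious about the underlying idea, the core of the quadric error metric can be sketched in a few self-contained lines (illustrative only, not the Open Inventor API): each plane incident to a vertex contributes a 4x4 quadric, and the cost of placing the vertex at a given position is a quadratic form.

```cpp
// Minimal sketch of the quadric error metric from Garland & Heckbert.
// A plane a*x + b*y + c*z + d = 0 (with unit normal) contributes the
// quadric K = p * p^T, where p = (a, b, c, d). The error of placing a
// vertex at position (x, y, z) is v^T * Q * v, where Q is the sum of
// the quadrics of all planes incident to that vertex.
struct Quadric {
    double q[4][4] = {};  // symmetric 4x4 matrix, zero-initialized

    void addPlane(double a, double b, double c, double d) {
        const double p[4] = {a, b, c, d};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                q[i][j] += p[i] * p[j];  // accumulate p * p^T
    }

    double error(double x, double y, double z) const {
        const double v[4] = {x, y, z, 1.0};
        double e = 0.0;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                e += v[i] * q[i][j] * v[j];  // v^T Q v
        return e;
    }
};
```

Simplification then repeatedly collapses the edge whose combined quadric yields the smallest error, which is why the algorithm is both fast and faithful to the original shape.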

The following benchmark shows the performance difference between Open Inventor 9.8 and Open Inventor 9.9 on meshes of various sizes.

Moreover, a new demo has been added to the package to illustrate how fast and reliable the new simplification algorithm is. It is located in $OIVHOME\examples\source\Inventor\examples\Features\Simplification\InteractiveSimplification. The example can be used with the default meshes, but also allows you to load your own meshes and save the simplified mesh as an .iv file.


Please refer to the documentation of SoGlobalSimplifyAction, SoShapeSimplifyAction and SoReorganizeAction.


Pre-selection callback for SoExtSelection

To improve performance when using SoExtSelection, a pre-selection callback has been added, making it possible to "skip over" particular nodes. SoExtSelection traverses each node of the scene graph and tests for intersection against each node. The new callback is called before the intersection test and can be used to avoid costly computation on nodes that should not be selectable. Use the PreFilterEventArg::discardNode() method inside your callback. The same effect can be achieved by adding SoPickStyle nodes (style = UNPICKABLE) to the scene graph, but the new callback avoids adding a large number of SoPickStyle nodes to a complex scene graph.

Progress indicator

A new class has been added to allow applications to display a progress indicator. SoProgressIndicator notifies the application when a task starts, stops or is in progress, so that it can, for example, update a progress bar on screen.

Progress is conceptually divided into sequential tasks, which are divided into subtasks, and each subtask can have multiple iteration steps. See the events onBeginTask, onEndTask, onBeginSubTask, onEndSubTask and onEndStep. Each event has specific arguments: TaskEventArg, SubTaskEventArg, StepEventArg.
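As an illustration of how an application might consume these events, the sketch below (a hypothetical helper, not part of the Open Inventor API) converts a task/subtask/step position into a single fraction suitable for a progress bar:

```cpp
// Hypothetical helper (not part of the Open Inventor API): converts the
// hierarchical task/subtask/step position reported by a progress source
// into a single fraction in [0, 1] for display in a progress bar.
// `task` and `subTask` are zero-based indices of the current items;
// `step` is the number of steps already finished in the current subtask.
double overallProgress(int task, int numTasks,
                       int subTask, int numSubTasks,
                       int step, int numSteps) {
    // Fraction of the current subtask completed by the finished steps.
    double subFraction = static_cast<double>(step) / numSteps;
    // Fraction of the current task: finished subtasks + the partial one.
    double taskFraction = (subTask + subFraction) / numSubTasks;
    // Fraction of the whole job: finished tasks + the partial one.
    return (task + taskFraction) / numTasks;
}
```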

You can find an implementation example in the following demos:

  • C++: SimpleLightedVolume ($OIVHOME\examples\source\VolumeViz\examples\simpleLightedVolume)
  • Java: SimpleLightedVolume ($OIVJHOME\examples\volumeviz\sample\simpleLightedVolume)
  • .NET: SoVolumeRenderingQuality ($OIVNETHOME\examples\source\VolumeViz\SoVolumeRenderingQuality)


VR Demo Under Linux

Support for Virtual Reality (VR), introduced for Microsoft Windows in Open Inventor 9.7, is now also available on Linux. It includes support for the popular HTC Vive and Oculus Rift Head Mounted Displays (HMD). Any Open Inventor scene graph can be displayed in VR, and existing applications can easily be modified to support VR rendering and interaction. You will find a basic example program and some VR helper classes in the directory $OIVHOME/examples/source/Inventor/examples/demoVR. The helper classes use head tracking to update the camera, render the Open Inventor scene graph in left and right eye views, and transfer the rendered images to the VR device. They also convert controller events (like button presses) and controller position tracking to the standard Open Inventor VR event classes (see for example SoTrackerEvent).

Please note that the VR classes require OpenVR. A prebuilt library is included with Open Inventor, but you may want to install and build it yourself. You will need to install SteamVR.


Image Stack projection

The SoImageStackProjectionProcessing3d engine creates a single image using the best pixels from an input image stack. This feature is particularly useful when an acquisition device cannot grab an image that is in focus across the whole acquisition field. For each pixel, the SoImageStackProjectionProcessing3d engine parses all images of the stack and selects the value offering the best contrast, in order to build a resulting image that is in focus everywhere. This engine can also produce higher quality results than the standard Maximum Intensity Projection (MIP) rendering in VolumeViz.

Figures: first image of the stack; second image of the stack; result of Image Stack Projection (in Gradient mode).
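The selection principle can be illustrated with a simplified, self-contained sketch (not the engine's actual algorithm), shown on 1-D "images" for brevity: for each pixel, keep the value from the slice with the highest local contrast.

```cpp
#include <cmath>
#include <vector>

// Simplified sketch of the projection principle: for every pixel, keep
// the value from the slice whose local contrast -- here, a crude measure,
// the absolute difference with the left neighbor -- is highest. A sharp
// (in-focus) slice has higher local contrast than a blurred one.
std::vector<double> bestContrastProjection(
        const std::vector<std::vector<double>>& stack) {
    const std::size_t n = stack.front().size();
    std::vector<double> result(n);
    for (std::size_t i = 0; i < n; ++i) {
        double bestContrast = -1.0;
        for (const auto& slice : stack) {
            // crude local-contrast measure at pixel i
            double c = (i > 0) ? std::fabs(slice[i] - slice[i - 1]) : 0.0;
            if (c > bestContrast) { bestContrast = c; result[i] = slice[i]; }
        }
    }
    return result;
}
```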

Structure enhancement filter

The new engines SoMultiscaleStructureEnhancementProcessing2d and SoMultiscaleStructureEnhancementProcessing3d compute a score between 0 and 1 for each pixel, 1 representing a good match with a structure model and 0 a background pixel. This provides a powerful technique for automatically identifying structures such as blood vessels. The size of the structures to enhance is defined by a scale range parameter. For instance, it is possible to reveal different diameters of rod structures by adjusting this range. The score can be computed either on a Hessian matrix to detect ridge structures, or on a Gradient tensor for object edges and corners. The available structure models are:

  • Balls (circular structures with Hessian in 2D, spherical in 3D, object corners with Gradient tensor)
  • Rods (linear structures)
  • Plates (only available in 3D)

The following publication describes this algorithm as applied to detecting rod structures with the Hessian matrix: A. F. Frangi, W. J. Niessen, K. L. Vincken, M. A. Viergever, "Multiscale vessel enhancement filtering", Lecture Notes in Computer Science (MICCAI), vol. 1496, pp. 130-137, 1998.

Figures: fundus photograph of the left eye*; multiscale structure enhancement filter based on the Hessian matrix, enhancing rod structures.

* Häggström, Mikael (2014). "Medical gallery of Mikael Häggström 2014". WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.008. ISSN 2002-4436. Public Domain.
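A minimal sketch of the 2-D vesselness score from the Frangi et al. paper cited above, computed from the Hessian eigenvalues at one pixel (the beta and c constants, and the bright-structure convention, are illustrative choices, not values from the Open Inventor engines):

```cpp
#include <cmath>

// 2-D vesselness score of Frangi et al., computed from the eigenvalues
// of the Hessian at one pixel, ordered so that |lambda1| <= |lambda2|.
// A bright ridge on a dark background has lambda1 near 0 and lambda2
// strongly negative; background pixels have both eigenvalues near 0.
double vesselness2d(double lambda1, double lambda2,
                    double beta = 0.5, double c = 15.0) {
    if (lambda2 >= 0.0) return 0.0;           // not a bright ridge
    double rb = lambda1 / lambda2;            // "blobness": ridge vs. blob
    double s = std::sqrt(lambda1 * lambda1 +  // "structureness": rejects
                         lambda2 * lambda2);  // low-contrast background
    return std::exp(-rb * rb / (2.0 * beta * beta))
         * (1.0 - std::exp(-s * s / (2.0 * c * c)));
}
```

In the multiscale engines, such a score is evaluated over the configured scale range and the maximum response per pixel is kept.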

Local adaptive thresholding

The new engine SoLocalAdaptiveThresholdProcessing performs a binarization by applying a threshold that is automatically adapted relative to the mean intensity value of a sliding window.

Figures: grayscale input image; attempted binarization with a global threshold; result of local adaptive thresholding, retaining pixels lower than 90% of the local mean.
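The principle can be sketched on a 1-D signal (an illustration, not the engine's implementation, which operates on images): a pixel is retained when it is lower than a given ratio of the mean of a sliding window centered on it.

```cpp
#include <vector>

// Sketch of local adaptive thresholding on a 1-D signal: out[i] is 1
// when img[i] is lower than `ratio` times the mean intensity of the
// window [i - radius, i + radius], clipped at the borders. A locally
// dark pixel is kept even if a global threshold would miss it.
std::vector<int> localAdaptiveThreshold(const std::vector<double>& img,
                                        int radius, double ratio) {
    const int n = static_cast<int>(img.size());
    std::vector<int> out(n, 0);
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        int count = 0;
        for (int j = i - radius; j <= i + radius; ++j)
            if (j >= 0 && j < n) { sum += img[j]; ++count; }
        double localMean = sum / count;
        out[i] = (img[i] < ratio * localMean) ? 1 : 0;
    }
    return out;
}
```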

Local thickness map optimization

The SoLocalThicknessMapProcessing3d engine provides a new parameter, PrecisionMode, which allows choosing between a precise and a fast computation mode. The previous version of the engine always used the precise mode.

The fast mode is especially useful when the input image contains thick structures, which take a very long time to compute. The resulting image is slightly less precise on the object boundaries. These artifacts are generally not harmful, since this information is usually not relevant on object borders.

Figures: binary input image (100x100x100); thickness map in precise mode (46 sec.); thickness map in fast mode (5 sec.).

Medical Content

3D volume "unfolding" based on a surface (preview feature)

Important note: the features described in this section are provided as a preview version. This means that the APIs of these features and their behavior are subject to change in the next Open Inventor version.

A set of engines is now available for creating a mesh approximating a surface in a 3D volume, and then extracting the corresponding voxels as a new volume. Volume unfolding is an essential technique for creating a dental "panoramic" image from 3D volume data. For material science and other markets, unfolding can be used to create a "flattened" view of the voxels corresponding to any cylinder, spiral or curved path in the input volume. In some cases, particularly for a dental panoramic, unfolding based on a surface that is simply extruded from a curve does not give the highest possible image quality. Therefore a surface fitting engine is provided that automatically adjusts the vertices of an input surface to follow the shape of the structures in the 3D volume. In the dental panoramic case, the fitted surface corresponds to the shape of the patient's teeth, jaw and facial bones.

  • SoPolylineResamplerApproximation3d to resample a given polyline with a fixed segment size; for example, to convert a polyline created interactively by the user into a regularly sampled smooth curve based on spline fitting.
  • SoPolylineExtrusionApproximation to create a quadrilateral mesh surface by extruding an input polyline in the specified direction.
  • SoSurfaceFittingApproximation3d to fit a mesh using a cost function given in a volume, e.g. representing the distance to the outside of a wall.
  • SoSurfaceUnfoldingProcessing3d to resample a volume by extracting voxels within a specified distance from a mesh surface, and to create a new volume based on a "flattened" version of the surface.

The unfolding can be tested with the new MedicalDentalSurfaceUnfolding demo added in folder $OIVHOME/examples/bin/arch-<xxx>-Release/Medical/Dental in the Open Inventor package.

Automatic Panoramic

Added in Open Inventor 9.9.4, the new DentalPanoramicExtractor class provides a high-level API dedicated to automatic dental panoramic extraction from a 3D CBCT volume (based on the engines listed above). Unlike the existing low-level API, this new class allows building a dental panoramic with only a few lines of code.

The panoramic extractor can be tested with the new medicalDentalAutomaticPanoramic demo added in the folder $OIVHOME/examples/bin/arch-<xxx>-Release/Medical/Dental in the Open Inventor package (C++ only).



MeshVizXLM .NET end of support

Open Inventor 9.9 will be the last version including the .NET version of MeshVizXLM. From this point on, the source will be frozen and no new features will be added to the .NET version of this module. Starting with Open Inventor 10.0, MeshVizXLM .NET will no longer be available.

Please note that MeshVizXLM can still be used in .NET applications through the MeshVizXLM C++ API.

Also note that the MeshVizXLM Java API is still supported. We have made this change because, unlike other components of Open Inventor, which are simply "wrapped", MeshVizXLM is natively implemented in each target language. Using MeshVizXLM C++ for both C++ and .NET applications allows us to spend more time tuning and optimizing performance.

Combining data sets (C++)

Until Open Inventor 9.9, MeshViz XLM was only able to take a single dataset into account when mapping a value to a color (single color mapping). This new version introduces the capability to combine several datasets into colors. To combine several datasets, a new class, MoCombineColorMapping, has been added to the public API.

To learn how to use the new feature, you can start with the tutorial example provided in OIVHOME\examples\source\MeshVizXLM\mapping\tutorials\ColorMapping\Combine.

MeshViz XLM used to support only single color mapping: the dataset used to map a value to a color was defined by the field MoMeshRepresentation::colorScalarSetId. This field's value identifies the dataset node by its occurrence in the scene graph during traversal (0 means that the skin uses the first MoScalarSet* node found). This behavior is still fully supported in this new version. However, as combine color mapping uses several scalar sets, the value of the field colorScalarSetId is ignored by any mesh representation when a MoCombineColorMapping is used.

In summary, MeshViz selects the list of datasets used to color any representation of a mesh as follows:

  • If a single color mapping is used (any class inherited from MoColorMapping), the dataset used to compute colors is selected according to the field colorScalarSetId.
  • If a combine color mapping (MoCombineColorMapping) is used, the representation class (MoMeshSkin, for instance) searches for all MoScalarSet objects found in the state during scene graph traversal. A list of scalar sets is thus built, and a subset of these scalar sets is extracted. Then, for each extracted vertex or polygon, the list of extracted scalar values is combined into a single color using the MiColorMapping<std::vector<double>>::getColor method.

Known limitation: The MoLegend class supports only single color mapping, so it logically ignores the MoCombineColorMapping class.
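To give an idea of what a combine mapping does, the hypothetical function below (your real implementation would be an MiColorMapping<std::vector<double>> subclass providing getColor) merges two normalized scalar values into one color by driving separate channels:

```cpp
#include <vector>

struct Rgb { double r, g, b; };

// Hypothetical combine function (not an Open Inventor API): the first
// scalar value drives the red channel and the second drives the green
// channel, producing one color per extracted vertex or polygon from
// several datasets. Scalars are assumed normalized to [0, 1].
Rgb combineToColor(const std::vector<double>& scalars) {
    Rgb c{0.0, 0.0, 0.0};
    if (!scalars.empty())   c.r = scalars[0];
    if (scalars.size() > 1) c.g = scalars[1];
    return c;
}
```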

Curvilinear mesh performance improvement (C++)

Thanks to a new multi-threaded implementation of internal algorithms, the time needed to extract a skin, a slab, or a slice on a curvilinear volume mesh has been drastically reduced in Open Inventor 9.9.
(Reminder: A curvilinear mesh can be used to efficiently represent a grid in which all adjacent cells share their corner points. For example, a reservoir model that does not contain “faults”.)

The first chart below shows the performance improvement for the first extraction (i.e. the one that builds the internal caches required to handle active cells). It shows different use cases on a grid containing 50 million cells. Please note that this benchmark was run on a 12-core CPU.

The following chart shows that even better improvements occur for subsequent extractions (subsequent extractions are performed when a rendering parameter has changed, such as a new slab position, new filtering criteria, etc.).

Finally, the chart available at this address shows that even memory consumption has benefited from this new algorithm implementation.

MoMeshVector improvement (C++ / Java)

The class MoMeshVector has a new capability: shifting each vector of the vector field representation along its own direction. A typical use case is to render each vector either pointing into the target point (usually the mesh node or the center of each cell), or starting from the target point.

For this, a new field, MoMeshVector::shiftFactor, has been added to define the shift factor, which is relative to each vector's direction. The following image illustrates the effect of shiftFactor (3 different values are shown) on a vector field defined PER_CELL (the target points are the centers of the cells):

Figures: shiftFactor = 0; shiftFactor = -1; shiftFactor = -0.5.
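The semantics illustrated above can be sketched as follows (an illustrative reading of shiftFactor, not the MoMeshVector implementation): each arrow is drawn from target + shiftFactor * v to that point plus v.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Illustrative reading of shiftFactor: the arrow for a vector v at a
// target point is drawn from start = target + shiftFactor * v to
// start + v. With shiftFactor = 0 the arrow starts at the target point;
// with shiftFactor = -1 the arrow tip lands on (points into) it.
Vec3 arrowStart(const Vec3& target, const Vec3& v, double shiftFactor) {
    return { target[0] + shiftFactor * v[0],
             target[1] + shiftFactor * v[1],
             target[2] + shiftFactor * v[2] };
}
```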


RemoteViz

  • An H.264 encoder using the OpenH264 library (CPU encoding) has been introduced in this new version of RemoteViz. Unlike the existing NVIDIA video encoder (NvEnc), this new encoder does not require any specific hardware. The demo $OIVHOME/examples/source/RemoteViz/HelloConeH264 has been updated to show how to implement a fallback from the NvEnc to the OpenH264 encoder.

  • The class BandwidthSettings has been removed and replaced by the class NetworkPerformance. The new class NetworkPerformance contains the same methods as the old class BandwidthSettings, except for the method setBandwidth(), which has been removed: the client bandwidth value can now be modified using the existing method ClientSettings::setBandwidth().
  • TypeScript is an open-source programming language developed and maintained by Microsoft. It is a strict syntactical superset of JavaScript, and adds optional static typing to the language. A TypeScript declaration file is provided in order to use the RemoteViz client API in a TypeScript application (see $OIVHOME/remotevizHTML5/RemoteVizClient.d.ts).

    In addition to the existing JavaScript web client of the RemoteViz example ClientWorkBench, a web client coded in TypeScript and using the framework Angular has been added (see $OIVHOME/examples/source/RemoteViz/ClientWorkbench/Clients/HTML5/TypeScript-Angular)

  • In RemoteViz 9.8, the frame quality was calculated from the FPS (frames per second) and the bandwidth defined for the connection. The drawback of this calculation is that when the bandwidth of the connection is modified, the only adjustable parameter is the frame quality; the FPS value is always considered constant.

    In RemoteViz 9.9, we introduced new policies for FPS and frame quality calculation:

    • KeepFrameQualityPolicy (default behavior): This policy favors frame quality over FPS. The frame quality is kept as constant as possible; the adjustable parameter is the FPS.
    • KeepFramesPerSecondPolicy: This policy favors FPS over frame quality. The FPS is kept as constant as possible; the adjustable parameter is the frame quality.

The policy used by a connection can be defined using the method ConnectionSettings::setFrameEncodingPolicy.