Tutorial - Useful Features

This tutorial introduces some useful features of the waLBerla framework which can make your life easier.

Checkpointing

You can checkpoint the current state of your rigid body dynamics simulation at any point to restore it afterwards. First you have to store the current domain partitioning using blockforest::BlockForest::saveToFile().

auto forest = createBlockForest( math::AABB(0,0,0,60,60,60),
                                 Vector3<uint_t>(2,2,2),              // number of blocks
                                 Vector3<bool>(false, false, false)); // periodicity
forest->saveToFile("SerializeDeserialize.sbf");

Then you have to store the current simulation data using domain_decomposition::BlockStorage::saveBlockData().

forest->saveBlockData("SerializeDeserialize.dump", storageID);

This will store all non-global rigid bodies to the file system.

To load everything again, start by creating the blockforest::BlockForest. This time you use a different constructor.

auto forest = make_shared< BlockForest >( uint_c( MPIManager::instance()->rank() ), "SerializeDeserialize.sbf", true, false );

Instead of initializing the storage block data as you normally would

auto storageID = forest->addBlockData(createStorageDataHandling<BodyTuple>(), "Storage");

you have to use domain_decomposition::BlockStorage::loadBlockData()

auto storageID = forest->loadBlockData("SerializeDeserialize.dump", createStorageDataHandling<BodyTuple>(), "Storage");

Unfortunately, due to an ordering problem in the loading scheme, you have to reload the bodies into your coarse collision detection.

for (auto blockIt = forest->begin(); blockIt != forest->end(); ++blockIt)
{
   ccd::ICCD* ccd = blockIt->getData< ccd::ICCD >( ccdID );
   ccd->reloadBodies();
}

Hopefully this gets fixed in the future. ;)

Attention
This method saves neither global bodies nor solver settings. You have to restore these on your own.

A fully working example can be found in the SerializeDeserialize.cpp test of the pe module.

VTK Output

For VTK Output you have to create vtk::VTKOutput objects. To output the domain partitioning use vtk::createVTKOutput_DomainDecomposition.

auto vtkDomainOutput = vtk::createVTKOutput_DomainDecomposition( forest, "domain_decomposition", 1, "vtk_out", "simulation_step" );

To output all sphere particles use vtk::createVTKOutput_PointData in conjunction with SphereVtkOutput:

auto vtkSphereHelper = make_shared<SphereVtkOutput>(storageID, *forest);
auto vtkSphereOutput = vtk::createVTKOutput_PointData(vtkSphereHelper, "Bodies", 1, "vtk_out", "simulation_step", false, false);

Currently only spheres are supported for VTK output, but you can easily write your own SphereVtkOutput and adapt it to the body type you like.

To actually write something to disk, call vtk::VTKOutput::write():

vtkDomainOutput->write();
vtkSphereOutput->write();

You can call this every time step if you want. The files will be automatically numbered so that ParaView can generate an animation.
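If you do not want output at every time step, a simple modulo check in the time loop is enough. A minimal sketch, where `outputSteps`, `numSteps`, and `writeInterval` are illustrative names and the commented write() calls correspond to the snippet above:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical driver loop: trigger VTK output every `writeInterval` steps.
// Returns the steps at which output would be written, for illustration.
std::vector<std::uint64_t> outputSteps(std::uint64_t numSteps, std::uint64_t writeInterval)
{
   std::vector<std::uint64_t> steps;
   for (std::uint64_t step = 0; step < numSteps; ++step)
   {
      // ... advance the simulation by one time step here ...
      if (step % writeInterval == 0)
      {
         // vtkDomainOutput->write();
         // vtkSphereOutput->write();
         steps.push_back(step);
      }
   }
   return steps;
}
```

With `numSteps = 10` and `writeInterval = 3`, output would be written at steps 0, 3, 6, and 9.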

Loading from Config

You can specify a config file as the first command line parameter. To access it you can use the Environment::config() function. You can access subblocks of the config with config::Config::getBlock().

auto cfg = env.config();
if (cfg == nullptr) WALBERLA_ABORT("No config specified!");
const Config::BlockHandle configBlock = cfg->getBlock( "LoadFromConfig" );

To get values from the config call config::Config::getParameter():

real_t radius = configBlock.getParameter<real_t>("radius", real_c(0.4) );

Certain tasks already have predefined loading functions. For example, you can directly create a BlockForest from the config file.

shared_ptr<BlockForest> forest = createBlockForestFromConfig( configBlock );

The corresponding block in the config file looks like:

simulationCorner < -15, -15, 0 >;
simulationDomain < 12, 23, 34 >;
blocks < 3, 4, 5 >;
isPeriodic < 0, 1, 0 >;
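Putting it together, a minimal config file for the snippet above might look like this (the block name must match the getBlock call; the values are illustrative):

```
LoadFromConfig
{
   simulationCorner < -15, -15, 0 >;
   simulationDomain < 12, 23, 34 >;
   blocks < 3, 4, 5 >;
   isPeriodic < 0, 1, 0 >;
}
```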

The hard contact solver (HCSITS) can also be configured directly from the config file:

BlockDataID blockDataID;
cr::HCSITS hcsits( globalBodyStorage, forest, blockDataID, blockDataID, blockDataID);
configure(configBlock, hcsits);

The config file looks like:

HCSITSmaxIterations 123;
HCSITSRelaxationParameter 0.123;
HCSITSErrorReductionParameter 0.123;
HCSITSRelaxationModelStr ApproximateInelasticCoulombContactByDecoupling;
globalLinearAcceleration < 1, -2, 3 >;

Timing

To get additional information about where your application spends its time, you can use the WcTimingTree. It gives you a hierarchical view of the time used. Usage example:

WcTimingTree tt;
tt.start("Initial Sync");
syncCallWithoutTT();
syncCallWithoutTT();
tt.stop("Initial Sync");

Before you output the information, you should collect it from all processes if you are running in parallel.

auto temp = tt.getReduced();
std::cout << temp;

Many built-in functions such as solvers or synchronization methods accept an additional parameter where you can pass your timing tree. They will then add detailed timing information to it.

SQLite Output

waLBerla also supports SQLite databases for simulation data output. This can come in handy in parallel simulations as well as in data analysis. To store information in an SQLite database, you have to fill three property maps, depending on the type of information you want to store.

std::map< std::string, walberla::int64_t > integerProperties;
std::map< std::string, double > realProperties;
std::map< std::string, std::string > stringProperties;
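For illustration, the maps might be filled like this. The struct, function, and key names are hypothetical; in waLBerla code you would use walberla::int64_t as in the declarations above:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical helper bundling the three property maps for one run.
struct RunProperties
{
   std::map< std::string, std::int64_t > integerProperties;
   std::map< std::string, double >       realProperties;
   std::map< std::string, std::string >  stringProperties;
};

// Key names are free-form; these values are only examples.
RunProperties collectRunProperties()
{
   RunProperties p;
   p.integerProperties["numBodies"] = 1000;
   p.integerProperties["timesteps"] = 500;
   p.realProperties["radius"]       = 0.4;
   p.stringProperties["solver"]     = "HCSITS";
   return p;
}
```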

You can then dump the information to disk. timing::TimingPool and timing::TimingTree already have predefined save functions, so you do not have to extract all the information yourself and store it in the property maps.

{
   auto runId = postprocessing::storeRunInSqliteDB( sqlFile, integerProperties, stringProperties, realProperties );
   postprocessing::storeTimingPoolInSqliteDB( sqlFile, runId, *tpReduced, "Timeloop" );
   postprocessing::storeTimingTreeInSqliteDB( sqlFile, runId, tt, "TimingTree" );
}