
CDT/Build/Martin


My thoughts are heavily influenced by my work on a CMake plugin for CDT. Some things are not easy to implement with the current build model, and here is what I think would improve the situation:

A build consists of two steps

Well, at least two. On closer inspection, the build divides into two or three steps:

  • A configure step (run cmake, qmake, autotools' configure, ... to generate a Makefile)
  • The actual build (run make, ninja-build, ...)

Both steps support build configurations, so both steps have to run for each build configuration. When the user imports a project from version control, it is unconfigured, so the configure step needs to run first. The configure step may be empty for the internal CDT builder, or if the default host toolchain is used and all settings are known by default.

Like an incremental build, the configure step only needs to re-run when the project's structure changes.
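A minimal sketch of the two-step model, using only sh and make (the generated Makefile here stands in for what cmake, qmake or configure would produce; all file names are illustrative):

```shell
# "Configure" step: generate a Makefile into a separate build directory.
# In a real project this is cmake/qmake/configure doing the work.
mkdir -p build
cat > build/Makefile <<'EOF'
all: ; echo hello > hello.txt
EOF

# Build step: run the generated build.
make -C build

# Re-running make is incremental; re-running the generator is only
# needed when the project structure changes.
```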

configure step

The results of the configure step do not get checked in (they are not shared between users or hosts).

  • the toolchain to be used for the build is specified
    • CMAKE_TOOLCHAIN_FILE to choose a cross compiler
    • qmake's mkspecs
    • autotools' --target / --host options
  • an output directory can be given
  • extra options are possible
  • the configure step can inspect the system and decide about optional parts of the build
    • CMake's find_package
    • GNU autoconf's feature checks
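As a concrete example of the toolchain point, here is a minimal CMake toolchain file selecting a cross compiler; the arm-linux-gnueabihf toolchain and the file name are illustrative assumptions:

```shell
# Write a minimal CMake toolchain file (illustrative content):
cat > arm-toolchain.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
EOF

# The configure step would then be invoked with it (not executed here):
#   cmake -DCMAKE_TOOLCHAIN_FILE=$PWD/arm-toolchain.cmake <source-dir>
```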

For CMake, qmake or GNU autotools, the following information is not available before the configure step has finished successfully:

  • the compiler used for the build -> needed
    • by the Built-In Settings Scanner
    • to set up the error parser for the compiler
  • include search paths of additional libraries / a cross SDK
    • needed by the indexer
    • at least CMake can output them in the form of a JSON file -> the indexer could parse the project without a build, using only the information from the configure run.
  • which build tool will be used (make, nmake, ninja-build, ...)
    • to set up the build tool's error parser
    • to know how to collect progress information (e.g. cmake-generated Makefiles print [nn%])
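The JSON route can be sketched as follows. CMake writes a compile_commands.json when configured with -DCMAKE_EXPORT_COMPILE_COMMANDS=ON; the file below is a hand-written stand-in for such output:

```shell
# Hand-written stand-in for CMake's compile_commands.json output:
cat > compile_commands.json <<'EOF'
[{"directory": "/prj/build",
  "command": "arm-linux-gnueabihf-gcc -I/opt/sdk/include -c /prj/src/main.c",
  "file": "/prj/src/main.c"}]
EOF

# The indexer could extract include search paths without running a build:
grep -o -- '-I[^" ]*' compile_commands.json
```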

Many of these settings are now set up in XML and stored in the .cproject file (which gets committed).

build step

The build is performed as it is today: call make, ninja or whatever was configured, and watch what happens:

  • GNU make error parser / ninja error parser / ...
  • compiler error parser (specific to the compiler actually used for the build)
  • parse the build output for scanner discovery, if not already known from the configure step (compiler specific)
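The progress-collection part can be sketched with a few lines of shell; the build output below imitates what a cmake-generated Makefile prints and is not from a real build:

```shell
# Imitation of cmake-generated build output:
build_output='[ 50%] Building C object CMakeFiles/myprog.dir/main.c.o
[100%] Linking C executable myprog'

# A build watcher could grab the percentage markers for a progress bar:
printf '%s\n' "$build_output" | grep -o '\[ *[0-9][0-9]*%\]'
```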


(possibly) deployment step

Some information needed to automatically setup the launch and the target deployment can be gathered from the build/build-system.

It is common practice to also implement a "make install" target in the build. Many Makefiles have one, and cmake and others support it, too. The result of the make install step often gives valuable information about which parts of the project need to be deployed on the target to run or debug the project. I have had good experience with a setup like this:

  • I'm doing cross development on a Linux host for an embedded target (imagine the ARM-based Raspberry Pi)
  • the Makefile implements "make install" of a binary called myprog to /bin
  • my host's folder $HOME/target-fs is mounted on the target at /opt/myprj
  • the RSE launch is set up to start /opt/myprj/bin/myprog
  • I configure "make install" as the default build target, with the environment variable DESTDIR set to $HOME/target-fs
  • after each build, the binary appears in the target's /opt/myprj/bin folder -> ready to launch.

Of course, this also works with deployment methods other than NFS.

To automatically set up deployment and launch on the target, this information can be read from the Makefile / CMakeLists.txt / .pro file by executing its make install target into an empty folder (specified by the DESTDIR variable) and inspecting what went there. Feed the result into the target deployment, or directly mount the folder on the target.
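A self-contained sketch of that idea, with a toy Makefile standing in for a real project (all names are illustrative):

```shell
# Toy Makefile whose install target stands in for a real project's:
cat > Makefile <<'EOF'
install: ; mkdir -p $(DESTDIR)/bin && touch $(DESTDIR)/bin/myprog
EOF

# Install into an empty staging folder and inspect what went there:
stage=$(mktemp -d)
make install DESTDIR="$stage"

# Everything under $stage is what needs to be deployed on the target:
find "$stage" -type f
```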
