6 Code maintenance

6.1 Poor man’s backup

There are many great backup programs, but you can actually get far with the free tools already on your system. The modern cloud-based backup programs can—in my experience—be immature, imposing unreasonable requirements on your disk structure. An older, trusted solution is given here. On Windows, “robocopy” is very persistent in its attempts to copy your data. The following text in a BAT file (as a single line) copies “MyProject” to a Google Drive—it could also be a network drive, etc.

Listing 6.1: Poor man’s backup using Robocopy
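The listing is only reproduced here as a minimal sketch; the source and destination paths are placeholders, while the options are those described in Table 6.1:

  robocopy C:\Projects\MyProject "G:\My Drive\Backup\MyProject" /s /XO /R:10 /NP /FFT /LOG+:C:\Backup\MyProject.log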

Table 6.1 describes the options used—there are more.

Table 6.1: Robocopy parameters.

Option Usage
/s Include subfolders
/XO Exclude files older than destination
/R:10 10 repetitive attempts if it fails
/LOG Logfile – and LOG+ appends to file
/NP No percentage – do not show progress
/FFT Assume FAT File Times (2-second granularity)

In Windows, you can schedule the above BAT file to be executed daily:

  1. Open the Control Panel
  2. Select Administrative Tools
  3. Select Task Scheduler
  4. Create a basic task
  5. Follow the wizard. When asked how often—say once a week, and then check-mark the relevant days of the week (checking all days gives the daily backup).
  6. If you are copying to a network drive, make sure that the backup only tries to run when you are on the domain and have AC power. Do not require the PC to be idle.
  7. Select a time when you are normally at lunch or otherwise not using your PC.

The faithful Linux equivalent to the above is to use the “tar” command to copy and compress the data, and “crontab” to set up the “cron” daemon that schedules the regular backup. Contrary to the Windows concept above, this is described in numerous places, so the details are skipped here.

There is a general pattern for scheduling jobs in both of the above procedures: divide the setup into what and when, as the sketch below shows.
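As a minimal sketch of this pattern on Linux (the paths and the schedule are assumptions), the “what” is a small script using tar, and the “when” is a single crontab line installed with “crontab -e”:

  #!/bin/sh
  # backup.sh - the "what": pack and compress the project with a date stamp
  tar czf /mnt/backup/MyProject-$(date +%F).tar.gz /home/me/MyProject

  # The "when": one line added with "crontab -e" - run every day at 12:15
  15 12 * * * /home/me/backup.sh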

6.2 Version control—and git

There is a lot of hype surrounding the use of version control. The most important thing, however, is that some kind of version control is indeed used. The second priority is that check-ins are performed daily, and that when the project is in the stabilization phase, single bug-fixes are checked in atomically with reference to a bug-number. This is important because bug-fixing is the most error-prone type of coding. We want to be able to:

  1. Make a “shopping list” of all bug-fixes introduced between version A and version B. In business-to-business, most customers typically want the exact same version as yesterday, only with one particular bug-fix added. This is not very practical for the developers, as it is rarely the same bug-fix the different customers want. The best thing we as developers can therefore do is to document exactly which bug-fixes are in the bag. When this is possible, it may trigger “ah, yes I would actually like that fix, too,” and at least it shows the customer that we as a company know what we are doing, and what we are asking him or her to take home. Working with consumer products does not impose the above customer requirements; however, the process still makes sense. If you have to produce a bug-fix release here and now, you may still want to find something with the minimum change, as this requires less testing for the same amount of confidence.
  2. Answer exactly which version the customer needs in order to get a specific bug-fix. This relates to the above—a minimal change-set.
  3. Know which bug-fix disappears if something is rolled back due to new bugs or unwanted side effects.
  4. Know that any rollback will roll back full fixes, not just parts of them.
  5. Make branches and even branches on branches, and merge back. Manual assistance may be necessary with any tool.

On top of the above, there are more “nice-to-have” requirements such as a graphic tool integrated into Windows Explorer, integration with IDEs, etc.

In principle, all the above requirements are honored by “old-school” tools such as CVS, Subversion, and Perforce. However, if you work with modern software development, you know that today git is it. The major challenge with git is that it has a philosophy quite different from the others. Git has a number of great advantages, at the cost of increased complexity. If you work with Windows software, or a completely custom embedded system, you and your team17 can probably choose your tool freely, but if you work with Linux or other open source, it is not practical to avoid git. Most likely, you will love it in the long run, but there can be moments of cold sweat before you get to that point.

Up until git, there was a central repository, often known as a repo, containing all the code, and developers could checkout whichever part they needed to work with. So either your code was checked in—or it wasn’t. With git the repo is distributed, so that everybody has a full copy. You commit code from your working directory to your local repo, or checkout code the other way. But you need to synchronize your repo with a push to the central repo, in order to share code with other developers. You could say all repos are created equal, but one is more equal than the others; see Figure 6.1.

Figure 6.1: Main git actions.

Some major benefits of this concept are that it is lightning fast to commit or checkout, and that a server meltdown will probably not lead to a disaster. Another benefit of git is that you can stage which parts of your changed code you want to commit. As stated, it is extremely important that bug-fixes or features are checked in atomically, so you either get the full fix, or nothing, when you later checkout the source. You may have tried the scenario where you are working on a feature and are then asked to do a small and “easy” fix. The actual code change is quite simple, but you cannot commit all the now changed code without side effects. So you need to commit only the changes relating to the new bug-fix. Git introduces a “staging area,” aka the index. This is a kind of launch ramp onto which you place the source files, or maybe even just parts of source files, that you want to commit. This is great, but it does introduce one more level between your working directory and the shared repo.
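A minimal sketch of such a partial commit (the file names and bug number are hypothetical):

  git add src/protocol.c            # stage only the file containing the bug-fix
  git add -p src/main.c             # or stage selected hunks of a file interactively
  git commit -m "Bug #1234: fix off-by-one in protocol parser"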

Finally, git also introduces a so-called stash. This is a kind of parking lot, where you can park all the stuff you are working on while you briefly work on something else, like the fast bug-fix. If you have a newer version of a work-file you do not want to commit, it can obstruct a git pull from the remote repo. You can go through a merge, but instead it may be easier to git stash your workdir, do the git pull of the remote repo, and finally do a git stash apply bringing back your newer workfile. Now you can commit as usual, avoiding branches and merges. Basically, this is close to the standard behavior of, for example, Subversion.
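A minimal sketch of the stash flow just described:

  git stash                         # park the uncommitted changes from the workdir
  git pull                          # fetch and merge the remote changes
  git stash apply                   # bring the parked changes back on top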

Getting to grips with git is thus not so simple, but it is a worthwhile investment. John Wiegley has written a recommendable article, “Git from the bottom up.” He explains how git is basically a special file system with “blobs” (files) that each have a unique “safe” hash,18 and how commits are the central workhorse, with branches being a simple consequence of these. John does some great demos using some scary low-level commands, showing the beauty of the inner workings of git. The scary part of these commands is not so much his usage, but the fact that they exist at all.

An extremely nice thing about git is that renaming files is simple. With other version-control tools, you often think that you ought to rename a file, or move it up or down a directory level, but you shy away from this because you will lose the history, and thus make it difficult for others to go back to older versions. Git’s git mv command renames files. This command does all that is needed on the local disk as well as in the index/stage. Likewise, git rm removes files locally and in the index. The only thing left to do in both cases is to commit. The history still works as expected.
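A small sketch of this (the file names are hypothetical):

  git mv motor_ctrl.c motor_control.c   # rename on disk and in the index
  git rm obsolete_test.c                # remove on disk and in the index
  git commit -m "Rename motor control module, remove obsolete test"
  git log --follow motor_control.c      # the history survives the rename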

There are a number of graphic tools for git, but you can go far with gitk, which works on Linux as well as on Windows, and comes with a standard git installation. It is “view-only,” which can be a comfort—it is not so easy to do something terrible. An interesting part of the standard installation on Windows is a bash shell. Like gitk, this may be fired up via a right-click in Windows Explorer. This is a good place to practice the Linux command shell before you take the bigger leap to a real or virtual-machine Linux. It is perfectly possible to use git inside a normal Windows command shell, but it is a little weird to use “DOS” commands for standard file operations while using “rm,” “mv,” etc. as soon as the command starts with “git.” Table 6.2 shows some of the most frequently used git commands.

Clearly, it is easy to make a repo. Therefore, it makes sense to make many smaller repos. It is almost just as easy to clone yet another repo as it is to get a zip file and unzip it. This is an important difference from the old version-control systems.

Table 6.2: Selected git commands.

Command Meaning
git init Creates “.git” repo dir in the root of the workdir it is invoked in
git init --bare Creates the repo without workdir files
git clone <URL> Create local repo and workdir from remote repo
git status Where are we? Note that untracked files will cause confusion if .gitignore is not used well
git add <file> Untracked file becomes tracked. Changed files are staged for commit
git commit <file> File committed to local repo. Use -m for message
git commit -a All changed, tracked files committed – skipping stage. Use -m for message
git checkout Get files from local repo to workdir
git push Local repo content pushed to remote repo
git fetch Remote repo content pulled to local repo – but not to workdir
git merge Merge the local repo and the workdir – manually assisted
git pull As fetch, but also merges into workdir
git rebase Detach chain of commits and attach to another branch. Essentially a very clean way to merge
git stash Copy workdir to a stack called “stash” (part of local repo but never synced remotely)
git stash apply Copy back the latest workdir stashed

So how does git know which remote URL to use with which repo? In the best Linux tradition, there is a “config” file in the “.git”19 directory. This contains the important remote URL, as well as the name of the current branch in the local and remote repo, instructions on whether to ignore case in file names (Windows) or not (Linux), merge strategy, and other stuff. From anywhere in the tree below the root of the workdir, git finds this file and is thus aware of the relevant context. This isolates the git features within the working dir, and thus allows you to rename or move the workdir without problems. It remains functional, referring to the same remote repo.
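A sketch of what such a config file may contain; the URL and branch name are placeholders:

  [core]
      repositoryformatversion = 0
      ignorecase = true
  [remote "origin"]
      url = https://git.example.com/team/myproject.git
      fetch = +refs/heads/*:refs/remotes/origin/*
  [branch "master"]
      remote = origin
      merge = refs/heads/master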

It is often stated that “git keeps the full copy of all versions.” This is not entirely correct. The local repo is a full and valid repo and, therefore, can reproduce any version of a file. At the lower storage level, it stores a full copy of HEAD, the tip version. However, git runs a pack routine which stores deltas of text files. In this way, you get the speed of having it local, and still you have a compact system.

The problem comes with binary files. It is never recommended to store binary files in a version-control system, but it can be very practical and often is not really a problem. As with other systems, git cannot store the binary files as the usual series of deltas; each commit stores a full copy.20 The difference from other systems lies in the full local copy of the repo. If you use, for example, Subversion, you typically checkout the tip (HEAD) of a given branch, and thus you only get one copy of the binary files. With git you get the full chain of copies, including all versions of binary files in all branches. Even if the files are “deleted,” they are still kept in order to preserve the commit history.

There are commands in git that can clean out such binary files, but it is not a single, simple operation (search for “git filter-branch”). There are also external scripts that will assist in deleting all but the newest of specific files. The actual reclaiming of space does not happen until a garbage collect is run with git gc. If you are using, for example, Bitbucket as your remote repo, however, the final git push will not be followed by a garbage collection until the configured “safety time,” usually 30 days, has passed. Until then, the remote repo will take up the old space plus the new. If you performed a cleanup because you were close to the space limit, this is really bad news.
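A sketch of such a cleanup on a local repo, assuming a hypothetical binary file name (newer tools such as git filter-repo do the same job, but the classic commands look roughly like this):

  git filter-branch --index-filter \
      'git rm --cached --ignore-unmatch firmware_image.bin' HEAD
  git gc --aggressive --prune=now   # actually reclaim the space locally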

For this reason, you should always take the time to fill out the .gitignore file in the root of the workdir. It simply contains file masks of all files which git should ignore. This is a good help in avoiding the erroneous commit of a huge PNG or debug database. A nice feature of this file is that you may write, for example, “*.png” in a line, and further down write “!mylogo.png.” This specifies that you generally do not want png files, except for your own logo file.
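A small sketch of such a .gitignore; the logo file name is hypothetical:

  *.o
  *.log
  build/
  *.png
  !mylogo.png     # keep this particular PNG even though PNGs are ignored in general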

6.2.1 GitHub and other cloud solutions

Git itself is an open-source tool from the Linux world, initially written by Linus Torvalds. As stated earlier, it can be confusing and complex to use—partly because there are so many ways to do things. This challenge is to a degree handled by companies such as GitHub.

This Microsoft-acquired company offers free hosting for smaller repos, and paid hosting for the bigger ones. Nevertheless, GitHub probably owes a good part of its success to the process it surrounds git with. By using GitHub, you subscribe to a specific workflow that is known and loved by a lot of developers.21

Basically, any feature implementation or bugfix starts with a branch, so that the master remains untainted. Now the developer commits code until he or she is ready for some feedback. A pull request is issued to one or more other developers. They can now look at the code and pull it into their branch (which can even be in another repo). Comments are registered and shared.

Now the branch may be deployed to a test. If it survives the test, the branch is merged into the master.
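A sketch of the command-line side of this flow (the branch name and bug number are hypothetical; the pull request itself is opened in the web interface):

  git checkout -b fix-1234-watchdog      # branch off master
  # ... edit and test ...
  git commit -a -m "Bug #1234: fix watchdog reset during boot"
  git push -u origin fix-1234-watchdog   # publish the branch, then open a pull request
  # after review and a green test run, the branch is merged into master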

Atlassian has a similar concept. Here, you can set up the system so that when a developer takes responsibility for a bug created in Jira (see Section 6.6), a branch is automatically created. Still from within Jira, a pull request can be issued for feedback, and when a positive check-mark is received, a merge is done.

6.3 Build and virtualization

Virtualization is a fantastic thing. Many developers use it to run Linux inside their Windows PC. This type of usage was probably also the originally intended one, but today it also makes a lot of sense to run a “Guest OS” inside a “Host OS” of the same type. When working professionally with software, FPGA, or anything similar that needs to be built, it is important to have a “build PC” separate from the developers’ personal computers. It has the necessary tools to perform the build(s) and distribute the results and logs, but no more.

Most development teams today use a repository—git, Subversion, Perforce, or whatever (see Section 6.2). Typically, this only covers the source, not the compiler, linker, and other binary tools, the environment settings, library folders, etc. Realizing how much time it can take to set up a build system that really works, it makes sense to keep this whole setup as a single virtual-machine image file. This can be rolled out at any time and used, with all the above-mentioned files completely unchanged. Something that builds today may otherwise not build on a PC two years from now. Preserving the complete image of the build PC over time is one good reason to virtualize it.

Sometimes even the developers may prefer to use such a virtual PC. The best argument against this is performance. However, if this is not something built daily but, for example, a small “coolrunner” or similar programmable hardware device, changed once in a blue moon,22 then dusting off the virtual image really makes sense. It is amazing how much a standard developer PC actually changes in a year. The build machine in the virtual PC must have all kinds of silent changes, like Windows Updates, turned off. The following “manual” assumes that you run Linux as the guest OS on a Windows host.

The best-known tool for virtualization is probably VMware, but the free Oracle VirtualBox also works great. Both VMware and Oracle today integrate seamlessly with standard Linux distros such as Ubuntu and Debian when it comes to mouse, clipboard, access to the local disk, etc. However, if you are using USB ports for serial debuggers, etc., you may find that you need to tell the virtual machine that the guest Linux is allowed to “steal” the relevant port.

To get started, you download Oracle VirtualBox or a VMware system and install it. Next, you download an ISO image of the Linux system. Typically, there are several images to choose from; just go for the simplest, you can always download packages for it later. From VirtualBox or VMware, you now “mount” the ISO on the nonphysical optical/DVD drive. You can also choose the number of CPU cores and the amount of memory you want to “lend out” to the guest system when it runs.
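If you prefer to script this instead of clicking through the GUI, VirtualBox’s VBoxManage command can do roughly the same. A sketch, with hypothetical VM name, disk size, and ISO file:

  VBoxManage createvm --name "build-vm" --ostype Ubuntu_64 --register
  VBoxManage modifyvm "build-vm" --memory 4096 --cpus 2
  VBoxManage createmedium disk --filename build-vm.vdi --size 40000    # size in MB, ~40 GB
  VBoxManage storagectl "build-vm" --name "SATA" --add sata
  VBoxManage storageattach "build-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium build-vm.vdi
  VBoxManage storageattach "build-vm" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ubuntu.iso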

When all this is done, you start the virtual machine. Now you see Linux boot. It will ask you whether it is okay to wipe the disk. This is okay; it is not your physical disk, just the area you have set aside for Linux. There will also be questions on regional settings. These can be changed later, so it is not a big deal if they are not set correctly. Finally, you can run Linux. After the initial try-outs, you probably want to go for the package manager to install a compiler, IDE, and utilities. You should also remember to “unmount” the ISO disk. If not, the installation starts over the next time you “power up.” Figure 6.2 shows Ubuntu Linux running inside Oracle VirtualBox.

Figure 6.2: Oracle VirtualBox running Ubuntu.

6.4 Static code analysis

Using static code analysis in the daily build is a good investment. There is no need to spend days finding something that could be found in a second by a machine at night, while you sleep. Often these static tools can be configured to abide by, for example, the rules of MISRA (the Motor Industry Software Reliability Association). Figure 6.3 is from a tool called “PREfast” within the Windows CE environment.

Figure 6.3: Prefast in Windows CE – courtesy Carsten Hansen.

If a tool like this is included in a project from Day 1, we naturally get more value from it. Having it on board from the start also enables the team to use the tool instead of enforcing rigid coding rules. An example is when we want to test that variable a is 0 and normally would write:

if (a == 0)

but instead write:

if (0 == a)

The first version is natural to most programmers, but some force themselves to the second, just to avoid the bug that happens if you only write a single equals sign. We should be allowed to write fluent code and have a tool catch the random bug in the nightly build.
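A small sketch of the bug in question, which is exactly the kind of thing a static analyzer (or a compiler with warnings fully enabled) flags in the nightly build:

  #include <stdio.h>

  int main(void)
  {
      int a = 5;

      if (a = 0)        /* bug: assigns 0 to a; the condition is always false */
          printf("never printed\n");

      if (a == 0)       /* the intended comparison */
          printf("a is now zero\n");

      return 0;
  }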

6.5 Inspections

Another recommendation is to use inspections, at least on selected parts of the code. Even more important, inspect all the requirements and the overall design. Even when working with agile methods, there will be something at the beginning. The mere fact that something will be inspected has a positive influence. On top of this come the bugs found, and on top of this, we have the silent learning programmers get from reading each other’s code. It is important to have a clear scope. Are we addressing bugs, maintainability, testability, or performance?

The most important rule in inspections is to talk about the code as it is—there “on the table.” We say “...and then the numbers are accumulated, except the last.” We don’t say “...and then he accumulates the numbers, forgetting the last.” Programmers are like drivers; we all believe we are among the best. Criticizing the code is one thing, but don’t criticize the programmer. These are the main rules on inspections:

Inspections are done with at least three people. They have roles: moderator, reader, author, and tester. The moderator facilitates the meeting, including preparations. He or she ensures that people obey the rules during the meeting and are prepared. The reader is responsible for reading the code line-by-line, but rephrased into what it does in plain language. The author may explain along the way. In my experience, the tester role is a little vague.

During the inspection, the moderator takes notes of all findings, with description and page- and line-number. Errors in comments and spelling of, for example, variables are noted as minor bugs.

All participants must prepare by reading the code or spec first. This is typically handed out in print. Do not forget line numbers. It is convenient to prepare with the code on the screen, allowing the inspectors to follow function calls, header files, etc., but to write comments on the handouts, bringing these to the meeting. Expect to use two hours to prepare and two hours to inspect 150–250 lines of code. It is possible to inspect block diagrams or even schematics in the same way.

Have a clear scope. Typically, any code that does not cause functional errors is fine, but you may decide beforehand to inspect for performance, if this is the problem. The scope is very important to avoid religious discussions on which concept is best.

Inspect code against a code guideline. A code guideline steers the inspection free of many futile discussions. If, however, you are using open source—adding code that will be submitted for inclusion in this open source—you need to adhere to the style of the specific code.

End the small quality process by looking at the larger: what could we have done better? Should we change the guideline?

6.6 Tracking defects and features

Tracking bugs, aka defects, is an absolute must. Even for a single developer, it can be difficult to keep track of all bugs, prioritize them, and manage stack dumps, screen shots, etc., but for a team this is impossible without a good defect tracking tool. With such a tool, we must be able to:

Enter bugs, assign them to a developer and prioritize them.

Enter metadata such as “found in version,” OS version, etc.

Attach screen dumps, logs, etc.

Assign the bug to be fixed in a given release.

Send bugs for validation and finally close them.

Receive mails when you are assigned—with a link to the bug.

Search in filtered lists.

Export to Excel, or to CSV (comma-separated values), which is easily read into Excel.

As described in Section 6.2, when the updated code is checked into the software repository, this should be in an “atomic” way. This means that all changes related to a specific bug are checked in together, and not with anything else.

It is extremely important that we can link to any bug from an external tool. This may be mails, a wiki, or even an Excel sheet. Each bug must have a unique URL.23 This effectively shuts out tools requiring users to open a database application and enter the bug number.

Figure 6.4 shows a fairly basic “life of a bug.” The bold straight lines represent the standard path, while the thinner curved lines are alternative paths.

Figure 6.4: Simple defect/bug workflow.

A bug starts with a bug report in the “new” state. The team lead or project manager typically assigns the bug to a developer. The developer may realize that it “belongs” to a colleague and, therefore, reassigns it. Now the bug is fixed. Alternatively, it should not be fixed, because the code works as defined, the bug is a duplicate, or is beyond the scope of the project. In any case, it is set to “resolved.” If possible, it is reassigned to the person who reported the bug, or a tester. This person may validate the decision and close the bug, or send it back to the developer.

This is not complex, but even a small project will have 100+ bugs, so we need a tool to help, and it should be easy to use.

If planned features and tasks are tracked in the same way as bugs, this can be used to create a simple forecasting tool. Figure 6.5 is an example of this.

Defects and tasks are exported from the defect-tracking tool, for example, every two weeks. Each export becomes a tab in Excel, and the sums are referenced from the main tab. “Remaining Work” (aka ToDo) is the most interesting estimate. However, “Total Work” is also interesting, as it shows when the team has changed estimates or taken on new tasks. Excel is set to plot estimated man-days as a function of calendar time. Since exports from the tool do not take place at exact intervals, it is important that export dates are used as the x-axis, not a simple index.

Figure 6.5: Forecasting a project—and taking action.

Excel is now told to create a trendline, which shows that the project will be done late July 2017. Now, suppose Marketing says this is a “no-go”—the product is needed in May. We can use the slope of the trendline as a measure of the project “speed,” estimated man-days executed per calendar day, and draw another line with this slope, crossing the x-axis at the requested release date.24 It is now clear to all that to get to that date, the project must cut 200 man-days’ worth of features. This is sometimes perfectly possible, sometimes not.

A graph of the same family is shown in Figure 6.6. It is from a small project, showing bugs created and resolved as a function of time, like before. This time it goes upwards and is therefore a “BurnUp chart.”

The distance between the two curves is the backlog. This graph is generated directly by “Jira,” a tool from the Australian company “Atlassian.” It integrates with a wiki called “Confluence.”

Finally, you may also create a “BurnUp Chart” based on simple time registration. Clearly, spending time does not assure that the job gets done. On the other hand, if less time than planned is spent in a project, it clearly indicates that the team is doing other things. This may be other projects competing for the same people. This is a clear sign for upper management to step in, and is the reason why we inside the project may prefer the BurnDown chart, or the first type of BurnUp chart, while it makes sense to use the latter BurnUp chart when reporting upwards.

Figure 6.6: Incoming bugs versus fixed bugs in Atlassian Jira.

Another recommended tool is the open-source “Trac.” This can be downloaded at trac.edgewall.org. This tool can do all of the listed requirements, and it is integrated with a wiki and a source-code repository. You can choose between CVS, SVN (Subversion), and git—with bugs known as “tickets.” I used this in a “previous life” and it worked very well in our small team. It runs on top of an Apache server, which runs on Linux as well as on Windows.

6.7 Whiteboard

With all these fancy tools, let’s not forget the simple whiteboard. This is one of the most important tools, almost as important as an editor and a compiler. Surely, a whiteboard is indispensable at the beginning of projects, when regular design brainstorms are needed.

In my team, if someone is stuck on a problem, we call a meeting at the whiteboard. This is not the same as a scrum meeting, because it takes longer, involves fewer people, and is more in-depth, but it may have been triggered by a scrum meeting or similar. We always start the meeting with a drawing. The person who is faced with the problem draws the relevant architecture. During the meeting, other people will typically add surroundings to the drawing, and we will also come up with a list of actions on the board, typically of an investigative nature. With smartphones, it is no problem to take a picture and attach this to a mail as simple meeting minutes.

It is important that the team members themselves feel confident in the setup and will raise a flag when they need such a meeting. Should this not happen, it falls to the team leader to do so.

6.8 Documentation

Documentation needed years from now belongs in, for example, Word files in a repository that can manage versions and access rights. This may even be necessary for ISO 9001 requirements, etc.

Apart from the more formal documents, most projects need a place to collaborate. As described in Section 6.6, it is not uncommon to find tools that integrate a wiki with a bug-tracking system. This is nice, but not a must. As long as any bug can be referenced with a URL, you can link to it from any wiki.

6.9 Yocto

Yocto is a neat way to generate a customized Linux system. As discussed in Chapter 3, a modern CPU system is like a set of Russian dolls with parts from various vendors—Bluetooth Module, flash memory, Disc, SoC CPU, Ethernet MAC, USB controller, etc. There are drivers for Linux for many of these, from the vendors and from open source users. These must be combined with the best possible version of the Linux kernel, U-Boot, cross-compilers,25 debuggers, and probably also with applications such as ssh, web server, etc.

If you buy a complete CPU board from a renowned vendor, it comes with an SDK (Software Development Kit). This SDK may not contain every feature you need, even though you know it’s out there, and it may not be updated as often as you would like it to be. If you create your own board, it is even worse; there is no ready-made SDK for you. This may not be so different from the “old days” with a small kernel, but you probably chose Linux exactly to get into the ecosystem with all of these drivers, applications, and tools. The number of combinations of all these factors is astronomical, and it is a big task to assemble your special Linux installation from scratch.

On the internet, you can find many “recipes” for how to assemble such a system, always with “try version y if version x does not work,” etc. So even though you can glue together something that works, the process is very nondeterministic, and even a small update of a single component can trigger a redo of the whole trial-and-error process.

This is the reason for the Yocto project. Based on the work done in the OpenEmbedded organization, a number of vendors and volunteers have organized an impressive system of configuration scripts and programs, allowing you to tailor your own Linux system for any hardware. Yocto is using a layered approach where some layers pull code from official sites, while you may add other layers containing your code. The layers are not exactly architectural layers as described in Section 4.3. The top layer is the “Distro”—the distribution. Yocto supplies a reference distro—currently called “Poky,” which may be used “as is,” or replaced by your own. Another layer is the BSP (board-support-package) that addresses the specific hardware. Figure 6.7 shows a sample system.

Figure 6.7: Yocto layers – sample.

Each layer may append or prepend various search paths, as well as add actions to the generic workflow, which is mainly executed with the bitbake tool through the following main tasks (a small recipe sketch follows the list):

Fetch
It is possible to configure your own local source mirror. This ensures that you can find the software, even if it is (re)moved from a vendor’s site. This saves bandwidth when many developers are working together, but it also means that you control and document which open-source software is actually used. In an extreme situation, you might be accused of violating an open-source license, and in this case it is an advantage to be able to reproduce the source of the source.

Unpack
Sources may reside in a git repository, a tarball, or other formats. Naturally, the system can handle the unpacking needed.

Patch
Minor bugfixes, etc. to official software may be done until they are accepted “upstream.”

Configure
The Linux kernel, busybox, and others have their own configuration programs, typically with semigraphical interfaces. The outcome of such a manual configuration can be stored in a file, and used automatically next time. Even better, the difference between the new and the default config is stored and reused.

Compile
Using the correct cross compiler with the correct basic architecture, 32/64-bit, soft or hard floating points, etc., the generic Linux source is compiled. The output is a complete image—with file system, compressed kernel image, boot-loader, etc. Yocto enforces license checks to help you avoid the traps of mixing open source with custom code.

QA
This is typically unit tests running on the build system. In order to run tests on what is typically a PC system, it is perfectly possible to compile code for a target as well as for the host itself, aka “native code.” Not all target code may be run on the build machine without hard-to-create stubs, but as shown in Section 4.4, it is often possible to cut some corners and use pregenerated input data. An alternative to a native build is to test the cross-compiled applications on the host machine using QEMU; see Section 5.1.

Install
The output from the build process is stored as needed. This may include the generation of “packages” in the various formats (RPM, DEB, or IPK), tar-balls or plain directory structures.
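As a small sketch of how these tasks surface in practice, here is a hypothetical application called “myapp” in your own layer; the layer name, paths, and URL are assumptions, and the recipe is simplified:

  # meta-mycompany/recipes-apps/myapp/myapp_1.0.bb
  SUMMARY = "Custom embedded application"
  LICENSE = "CLOSED"                    # custom code; no open-source license check needed

  SRC_URI = "git://git.example.com/myapp.git;protocol=https;branch=master"
  SRCREV = "${AUTOREV}"
  S = "${WORKDIR}/git"

  do_compile() {
      oe_runmake
  }

  do_install() {
      install -d ${D}${bindir}
      install -m 0755 myapp ${D}${bindir}
  }

Once the layer is registered in bblayers.conf, a “bitbake myapp” runs the fetch, unpack, configure, compile, and install tasks described above for this one recipe.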

As great as Yocto is, it is definitely not a walk in the park. One of the most important features is therefore the ability to create an SDK. This allows some programmers to work with Yocto and low-level stuff, while others can develop embedded applications on top of the SDK, maintained by their colleagues. Alternatively, a software house that specializes in tailoring Yocto distros creates an SDK for your team.

We have a small dedicated team working on the custom hardware and basic Linux platform. They use Yocto to generate and maintain this. Yocto generates an SDK that they supply to the application team(s). This SDK contains the selected open-source kernel and drivers as well as the customized “Device Tree Overlay” and true custom low-level functionality. In this way the application programmers do not need to bother with Yocto or to experiment with drivers. They can focus on applying their domain knowledge, being productive with the more general Linux tools.

6.10 OpenWRT

Before the days of BeagleBone and Raspberry Pi, a single hobbyist developer could play around with Linux on his or her PC, but it was not easy to get your hands on embedded hardware to play with. In the August 2004 edition of Linux Journal, James Ewing describes the Linksys WRT54G Wi-Fi router. Through a bug, it was revealed that it runs on Linux. It was based on an advanced MIPS platform with plenty of flash space left unused. This was a very attractive platform to play with, and in the following months and years a lot of open-source software was developed for it.

The OpenWRT organization stems from this adventure. Here, software is developed for numerous Wi-Fi routers and other embedded Linux-based devices. End users can go here and look for open Linux firmware for their device, and developers may use it as inspiration. It is not uncommon to find better firmware here than what is supplied by the vendor of the device.

6.11 Further reading

John Wiegley: Git from the bottom up
https://jwiegley.github.io/git-from-the-bottom-up
This is a great open-source 30-page article describing the inner workings of git.

Otavio Salvador and Daiane Angolini: Embedded Linux Development with Yocto Project
An easy-to-read manual/cookbook on Yocto.

trac.edgewall.org
A free tool combining wiki, bug tracking, and source control.
Ideal for small teams.

atlassian.com
A larger complex of tools in the same department as “trac.”

yoctoproject.org
The homepage of the Yocto project with great documentation.

openwrt.org
The homepage of the OpenWRT Linux distribution.

Jack Ganssle: Better Firmware – Faster
Jack Ganssle has a fantastic way of combining the low-level hardware with the high-level software to create overview and efficient debugging.
