Chapter 9. The Ethereum ecosystem

This chapter covers

  • A bird’s-eye view of the full Ethereum ecosystem
  • Decentralized address resolution with ENS
  • Decentralized content storage on Swarm and IPFS
  • External data access through oracles
  • Dapp frameworks and IDEs

In previous chapters, you learned about the main components of the Ethereum platform and how to implement and deploy a decentralized application using simple tools such as the Remix IDE and the geth console. You then improved the efficiency of the development cycle by partially automating the deployment with Node.js. You made further efficiency improvements by deploying and running your smart contracts on a private network and, ultimately, on Ganache, where you progressively reduced and almost eliminated the impact of infrastructural aspects of the Ethereum platform on the run and test cycle.

The tool set you’ve used so far has been pretty basic, but it has helped you understand every step of the build and deployment process of a smart contract. You’ve also learned about every step of the lifecycle of a transaction, from its creation, through a Web3 call, to its propagation to the network, to its mining, and ultimately to its persistence on the blockchain. Although you might have found these tools helpful and effective for getting started quickly and for learning various concepts in detail, if you decide to develop Ethereum applications on a regular basis, you’d use a different tool set.

This chapter gives you an overview of the wider Ethereum ecosystem, both from a platform point of view and from a development tool set point of view. You’ll learn about additional components of the Ethereum platform and alternative IDEs and frameworks that will allow you to develop and deploy Dapps with less effort. But before we start to explore the full Ethereum ecosystem, I’ll recap the current view of the platform and the development tool set.

9.1. The core components

Figure 9.1 summarizes all you know so far about the Ethereum platform and the development toolset.

Figure 9.1. Core components of the Ethereum platform you’ve learned so far: geth, Ethereum wallet, MetaMask, Ganache, Remix, solc, and Web3.js

Although you’ve installed the Go Ethereum client (geth) and the Ethereum wallet, you’re aware you could have installed alternative clients, such as cpp-ethereum (eth), Parity, EthereumJ, or pyethapp. Most of these come with a related wallet. You also could have decided to connect through MetaMask to an externally hosted node (in fact, an Infura node, as you’ll see later) or to a mock node with Ganache.

You’ve developed your smart contracts in Solidity using Remix (formerly known as Browser Solidity). When needed, you’ve moved the code to text files and compiled them with the solc compiler. In theory, you could have implemented smart contracts in other EVM languages, such as Serpent or LLL, but currently Solidity is widely regarded as the most reliable and secure language. Time will tell if Serpent makes a comeback or newer alternatives such as Vyper start to gather momentum.

You interacted with the network, including your deployed contracts, in Web3.js, initially through the interactive geth console. Then you moved to Node.js for better extensibility and automation.

Web3.js is a JavaScript-specific high-level API that wraps the low-level JSON-RPC API. Other high-level APIs are available that target other languages, such as web3j (for Java), Nethereum (for .NET), and ethereum.rb (for Ruby).
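
To make the relationship concrete, here’s what the same balance query looks like at both levels. This is an illustrative sketch; the address is the example address used elsewhere in this chapter:

// Low-level JSON-RPC payload, as it travels to the client:
// {"jsonrpc":"2.0","method":"eth_getBalance",
//  "params":["0x829bd824b016326a401d083b33d092293333a830","latest"],"id":1}

// The same call through the Web3.js high-level API:
var balance = web3.eth.getBalance(
    "0x829bd824b016326a401d083b33d092293333a830");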

9.2. A bird’s-eye view of the full ecosystem

Figure 9.2 provides a full view of the current Ethereum ecosystem, where you can see an additional set of development IDEs and frameworks, such as Truffle, aimed at improving the development experience. UI frameworks such as Meteor and Angular aren’t Ethereum-specific, but they’re widely adopted to build modern Dapp UIs. Also, generic testing frameworks such as Mocha and Jasmine are becoming a common feature of Dapp development environments.

You can also see additional infrastructural elements:

  • Ethereum Name Service (ENS)—This is a smart contract for the decentralized resolution of human-readable names, such as roberto.manning.eth, into Ethereum addresses, such as 0x829bd824b016326a401d083b33d092293333a830.
  • Swarm and IPFS—These are two competing networks for decentralized storage of content that Ethereum blockchain transactions can then reference through hash IDs (or friendly names resolved into hashes by ENS). Swarm comes directly under the Ethereum umbrella and is Ethereum-aware; IPFS is a general technology-agnostic protocol that provides similar functionality.
  • Oracle frameworks—These are smart contract frameworks (such as Oraclize) for accessing real-world data in a way that guarantees data authenticity and consistent processing of such data throughout the entire Ethereum network.
  • Whisper—This is a network for decentralized messaging that provides Ethereum smart contracts with asynchronous peer-to-peer communication, with resilience and privacy as main features. The Whisper API allows contracts to send messages with various degrees of security and privacy, from plain text and fully traceable to encrypted and virtually untraceable (so-called dark messages).
    Figure 9.2. Full view of the current Ethereum ecosystem, showing the items we haven’t yet covered in bold

  • Infura nodes—This is a set of Ethereum nodes that are hosted by Infura, a service owned by ConsenSys (the company also behind Truffle). Infura provides clients as a cloud service, with built-in security and privacy features. As for conventional cloud providers, Infura allows startups and independent developers to build Ethereum applications professionally without having to buy physical servers. MetaMask connects to these nodes.

The next few sections will examine ENS, Swarm, IPFS, and oracle frameworks in detail.

Whisper falls in the realm of message-oriented protocols. This is an advanced topic, so I won’t cover it further. But if you have experience in message-oriented applications and are eager to learn more, I encourage you to look at the Whisper documentation on the Ethereum wiki on GitHub (http://mng.bz/nQP4 and https://github.com/ethereum/wiki/wiki/Whisper).

From a conceptual point of view, Infura nodes work exactly like other full Ethereum nodes. Bear in mind, though, that Infura clients support a subset of the JSON-RPC standard, so you should check their technical documentation if you’re interested in exploring them further.
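
For example, once you’ve signed up with Infura and obtained an API key, pointing Web3.js at an Infura node takes a single provider configuration. This is a minimal sketch; YOUR_INFURA_API_KEY is a placeholder, and you should check Infura’s documentation for the exact endpoint format:

var Web3 = require('web3');
// Connect to a hosted Ropsten node instead of a local geth instance
var web3 = new Web3(new Web3.providers.HttpProvider(
    "https://ropsten.infura.io/YOUR_INFURA_API_KEY"));

// Any standard read-only call then works as usual:
web3.eth.getBlockNumber(function (error, blockNumber) {
    console.log(blockNumber);
});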

Before closing this chapter, I’ll briefly present the main development tools for building Dapps. Then, in the next chapter, I’ll move on to Truffle, the main smart contract development IDE, which I’ll cover in detail through hands-on examples.

9.3. Decentralized address resolution with ENS

The Ethereum Name Service, also known as ENS, manages decentralized address resolution, offering a decentralized and secure way to reference resource addresses, such as account and contract addresses, through human-readable domain names. An Ethereum domain name is, as with internet domain names, a hierarchical dot-separated name. Each part of the domain name (delimited by dots) is called a label. Labels include the root domain at the far right, for example eth, followed by the domain name at its immediate left, followed by child subdomains moving further to the left, as illustrated in figure 9.3.

Figure 9.3. The structure of an ENS name. You can see the root domain, eth, at the far right, followed by the domain name at its left, and nested child subdomains moving from right to left.

For example, you could send Ether to roberto.manning.eth (a subdomain of eth) rather than to 0xe6f8d18d692eeb02c3321bb9a33542903073ba92, or you could reference a contract with simplecoin.eth rather than with its original deployment address, 0x3bcfb560e66094ca39616c98a3b685098d2e7766, as illustrated in figure 9.4. ENS also allows you to reference other resources, such as Swarm and IPFS content hashes (which we’ll meet in the next section), through friendly names.

Figure 9.4. ENS resolves names into external (user) addresses, contract addresses, and Swarm content hashes. You can’t tell from the domain name itself if it’s mapped to an address or a Swarm hash. As you’ll see later, a domain name must be mapped explicitly to a specific name resolver for either an address or a Swarm hash (or some other resource identifier).

ENS is encapsulated as a smart contract, and because its logic and state are stored on the blockchain, and therefore decentralized across the Ethereum network, it’s considered inherently more secure than a centralized service such as the internet Domain Name Service (DNS). Another advantage of ENS is that it’s decentralized not only from an infrastructural point of view, but also from a governance point of view: domain names aren’t managed by a central authority but can be registered directly by the interested parties through registrars. A registrar is a smart contract that manages a specific root domain, such as eth. Domains are assigned to the winners of open auctions executed on the related registrar contract; the winner of a domain also becomes the owner of all its child subdomains.

9.3.1. ENS design

The ENS system is structured as three main components:

  • Registrar—This is a contract that manages domain ownership. You must claim a domain name through the registrar and associate it with one of your accounts before you can register specific full domain names associated with it. Specific registrars handle each root domain, such as .eth, which is the root domain for names associated with Ethereum MAINNET addresses, or .swarm, which is the root domain for names associated with Swarm content hashes. Note that you must register ownership of domain names pointing to TESTNET Ethereum addresses through a test registrar that manages the .test root domain; this is a separate registrar from the one managing the .eth root domain.
  • Resolvers—These are smart contracts that implement a common ABI interface specified in Ethereum Improvement Proposal (EIP) 137, which you can consult here: http://eips.ethereum.org/EIPS/eip-137. A resolver translates a domain name into a resource identifier. Each resolver is specific to one resource type. For example, there’s a resolver for Ethereum addresses (called public resolver), another resolver for IPFS content hashes, and so on.
  • Registry—This is, in a nutshell, a map between domain (or subdomain) names and domain name resolvers.

The simple design of the ENS registry, shown in figure 9.5, makes it easily extensible, so you can reference custom resolvers implementing address translation rules of any complexity. Also, it can support a new resource type in the future without needing any modification and redeployment of the registry: a domain name for a new resource type will point to a new resolver. Figure 9.6 shows the domain name resolution process.

Figure 9.5. The ENS registry design. The ENS registry contract is a map between resource types and related domain resolver contracts. In the future, it can support a new resource type by pointing a domain name (associated with the new resource type) to a new resolver. Domain ownership is registered through a specific registrar.

Figure 9.6. The domain name resolution process: 1. you query the Registry to identify the correct resolver; 2. you request the relevant resolver to translate the domain name into an address.

As you can see in figure 9.6, a domain name is resolved in a two-step process:

  1. You query the Registry to identify the correct resolver associated with the domain name you want to resolve, and the Registry returns the contract address of the relevant resolver.
  2. You request the relevant resolver to translate the domain name into a resource identifier, such as an Ethereum address or a Swarm hash.
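
Expressed with the ensutils.js helpers you’ll meet in the next section, the two steps look roughly like this in the geth console:

> var node = namehash('roberto.manning.eth');
> var resolverAddress = ens.resolver(node);          // step 1: ask the Registry
> resolverContract.at(resolverAddress).addr(node);   // step 2: ask the resolver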

Every mapping record stored on the registry contains the information shown in table 9.1.

Table 9.1. ENS registry mapping record

  • Domain name—For performance and privacy reasons, a hash of the domain name, called Namehash, is used rather than the domain name itself. (Read the sidebar if you want to know more about this.) Example: 0x98d934feea78b34... (Namehash of roberto.manning.eth)
  • Domain owner—The address of the external (user) account or contract account owning the domain name. Example: 0xcEcEaA8edc0830C...
  • Domain name resolver—The address of the resolver contract able to resolve the domain name for the related resource type. Example: 0x455abc566... (public resolver address)
  • Time to live—How long the mapping record should be kept on the registry; it can be indefinite or a specified duration. Example: 6778676878 (expiry date as UNIX epoch)
Namehash

For performance reasons and for the privacy of the domain owners, ENS works against a 32-byte hash of the domain name rather than its plain string representation. This hash is determined through a recursive algorithm called Namehash, which, if applied, for example, to roberto.manning.eth, works as follows:

  1. Split the full domain name into labels, delimited by the dots; order them from the last to the first; and add an empty label as a first item:
                labels = ['', 'eth', 'manning', 'roberto']
  2. Pick the first item. Because it’s empty, determine the associated namehash by setting it to 32 ‘0’ bytes. The namehash corresponding to an increasing part of the full domain name is called node. So far, here’s what you have:
     node = 0x0000000000000000000000000000000000000000000000000000000000000000
  3. Pick the second label ('eth') and determine its associated label hash by applying the keccak256 hashing function:
     labelHash = keccak256('eth') =
     0x4f5b812789fc606be1b3b16908db13fc7a9adf7ca72641f84d75b47069d3d7f0
  4. Determine the node associated with the second label by hashing the concatenation of the previous node with the current label hash:
     node = keccak256(node + labelHash) =
     keccak256(0x00000000000000000000000000000000000000000000000000000000000000004f5b812789fc606be1b3b16908db13fc7a9adf7ca72641f84d75b47069d3d7f0) =
     0x93cdeb708b7545dc668eb9280176169d1c33cfd8ed6f04690a0bcc88a93fc4ae
  5. Pick the third item ('manning') and repeat steps 3 and 4.
  6. Pick the fourth item ('roberto') and repeat steps 3 and 4. Finally, the namehash of roberto.manning.eth is
     0x5fd962d5ca4599b3b64fe09ff7a630bc3c4032b3b33ecee2d79d4b8f5d6fc7a5
    The following table summarizes the steps taken by the Namehash algorithm to hash roberto.manning.eth.

    Step   Label        labelHash            keccak256(node + labelHash)           Node
    1      ''           N/A                  N/A                                   0x000000000000...
    2      'eth'        0x4f5b812789fc...    keccak256(0x0000... 4f5b8127...)      0x93cdeb708b7...
    3      'manning'    0x4b2455c1404...     keccak256(0x93cde... 4b2455c...)      0x03ae0f9c3e92...
    4      'roberto'    0x6002ea314e6...     keccak256(0x03ae0... 6002ea3...)      0x5fd962d5ca45...
    Here’s a JavaScript implementation of the process from Nick Johnson’s ENS utility library ensutils.js (see the next section for more details), which you can run in the geth console or in the Node.js console:

    function namehash(name) {
        var node =
            '0x0000000000000000000000000000000000000000000000000000000000000000';   1
        if (name !== '') {
            var labels = name.split(".");                                           2
            for (var i = labels.length - 1; i >= 0; i--) {
                label = labels[i];                                                  3
                labelHash = web3.sha3(label);                                       4
                node = web3.sha3(node + labelHash.slice(2),
                    {encoding: 'hex'});                                             5
            }
        }
        return node.toString();                                                     6
    }

  • 1 Node corresponding to the empty label ''
  • 2 Splits the full domain name into its constituent labels
  • 3 Gets current label
  • 4 Calculates label hash
  • 5 Concatenates previous node with current label hash (removes '0x' from label hash) and calculates current node using hex encoding
  • 6 Returns final node as a string
Warning

The web3.sha3() function creates a keccak256 hash. It doesn’t follow the SHA-3 standard, as the name would suggest.
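
Once you’ve loaded ensutils.js into the geth console (as shown in the next section), you can sanity-check this implementation against the worked example above:

> namehash('eth')
"0x93cdeb708b7545dc668eb9280176169d1c33cfd8ed6f04690a0bcc88a93fc4ae"
> namehash('roberto.manning.eth')
"0x5fd962d5ca4599b3b64fe09ff7a630bc3c4032b3b33ecee2d79d4b8f5d6fc7a5"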

9.3.2. Registering a domain name

Enough theory! Let’s see how to register a domain name on the ENS instance running on the Ropsten testnet from the geth console.

First of all, download the ENS JavaScript utility library from here: http://mng.bz/vN9r. Place this JavaScript file in a folder, for example, C:\ethereum\ens.

Warning

Although useful for learning ENS, the ENS JavaScript utility libraries ensutils.js and ensutils-testnet.js aren’t meant to be used to build a production Dapp.

Now, from an OS shell, start up geth against TESTNET, as you’ve done several times before. (Remember to use the --bootnodes option if peer nodes aren’t located quickly, as you did at the start of chapter 8.) Type the following:

C:\Program Files\geth>geth --testnet

Geth will start synchronizing, as expected. From a separate command shell, start an interactive console:

C:\Program Files\geth>geth attach ipc:\\.\pipe\geth.ipc

Then import the ENS utility library on the interactive geth console you’ve attached:

>loadScript('c:/ethereum/ens/ensutils-testnet.js');

Registering a domain on the TESTNET network means registering it on the .test root domain rather than on .eth, which is associated with MAINNET, the public production network. This means you must use the test registrar.

The domain name I’ll be registering is roberto.manning.test. Pick a similar three-label domain name and adapt the instructions I’m about to give you accordingly.

Checking Domain Ownership

First of all, I have to check if anyone else already owns the manning domain. If someone does, I won’t be able to register my full domain name (roberto.manning.test); I’d have to ask the current owner to do it for me.

This is how you can check if the manning domain is free on the test registrar:

>var domainHash = web3.sha3('manning');
>var domainExpiryEpoch = testRegistrar.expiryTimes(domainHash).toNumber() * 1000;
>var domainExpiryDate = new Date(domainExpiryEpoch);

Check the value of domainExpiryDate (by entering it at the prompt). If it’s earlier than today, the domain is free; otherwise, you must choose another domain and repeat the check.

Note

You might be wondering what happens in the unlikely event that ownership of 'manning' hasn’t been registered yet but another name with the same web3.sha3() hash has been registered. If this happens, you won’t be able to register 'manning' because it would appear to the registrar as already taken.

Registering Domain Ownership

After checking that the domain is free, you can claim it by registering it through the test registrar against one of your TESTNET accounts; for example, eth.accounts[0]. (Make sure accounts[0] has enough Ether to execute the transaction by checking, as usual: eth.getBalance(eth.accounts[0]); also, replace 'PASSWORD_OF_YOUR_ACCOUNT_0' with your accounts[0] password.) Enter the following:

>personal.unlockAccount(eth.accounts[0], 'PASSWORD_OF_YOUR_ACCOUNT_0');
>var tx1 = testRegistrar.register(domainHash,
eth.accounts[0], {from: eth.accounts[0]});

Check the value of tx1, and then check that the related transaction has been mined by going to Ropsten Etherscan: https://ropsten.etherscan.io. Note that registering domain ownership on MAINNET is a more complex process. (See https://docs.ens.domains/en/latest/ for more details.)
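
You can also confirm the registration without leaving the console. A quick check, assuming the ensutils-testnet.js helpers are still loaded:

> eth.getTransactionReceipt(tx1)          // non-null once the transaction is mined
> ens.owner(namehash('manning.test'))     // should now return your accounts[0] address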

Registering the domain name

Once the domain ownership transaction has been mined, it’s time to set up the domain name mapping configuration you saw in table 9.1. You already set some of the configuration (the domain account owner) by registering the domain ownership through the registrar. Now you have to configure the resolver and the target address that the domain name will be mapped to.

You can map your domain name to the public resolver (which, as you know, maps a domain name to a given Ethereum address) through the ENS registry as follows:

>tx2 = ens.setResolver(namehash('manning.test'),
publicResolver.address, {from: eth.accounts[0]});

Check on Ropsten etherscan if tx2 has been mined, then configure the public resolver to point your domain name to the target address (for example, your test accounts[1]), as follows:

>publicResolver.setAddr(namehash('manning.test'),
eth.accounts[1], {from: eth.accounts[0]});
Registering the Subdomain

Registering the ownership of a subdomain is slightly different from registering the ownership of a domain: you don’t perform it through the registrar, but through the ENS registry. Assign the ownership of the subdomain roberto.manning to accounts[2], as follows:

>ens.setSubnodeOwner(namehash('manning.test'),
web3.sha3('roberto'), eth.accounts[2], {from: eth.accounts[0]});
Warning

The account running the transaction must be the owner of the 'manning.test' domain: accounts[0].

Using accounts[2], the owner of the 'roberto.manning.test' subdomain, you can now map it to the public resolver as usual:

>ens.setResolver(namehash('roberto.manning.test'),
publicResolver.address, {from: eth.accounts[2]});

Finally, you can configure the public resolver to point your subdomain name to the target address (for example, your test accounts[3]), as follows:

>publicResolver.setAddr(namehash('roberto.manning.test'),
eth.accounts[3], {from: eth.accounts[2]});

9.3.3. Resolving a domain name

Resolving a domain name into an address is straightforward. Resolve 'manning.test' first:

>var domainName = 'manning.test';
>var domainNamehash = namehash(domainName);
>var resolverAddress = ens.resolver(domainNamehash);
>resolverContract.at(resolverAddress).addr(domainNamehash);

You’ll see

0x4e6c30154768b6bc3da693b1b28c6bd14302b578

and you can verify this is your accounts[1] address, as expected:

> eth.accounts[1]

This is a shortcut to resolve the domain name:

>getAddr(domainName);
0x4e6c30154768b6bc3da693b1b28c6bd14302b578

If you’re interested in learning more about ENS—for example, to claim an .eth domain name in MAINNET through a commit-reveal bid—I encourage you to consult the official documentation written by Nick Johnson, the creator of ENS. You can find it at https://docs.ens.domains/en/latest/.

9.4. Decentralized content storage

A common use case for decentralized applications is to store a sequence of documents proving, for example, the provenance of goods traded through the applications. A typical example is diamonds, which traditionally are accompanied by paper certificates showing that they come from legitimate mines and traders. For more complex supply chains, such as in the field of international trade finance (https://en.wikipedia.org/wiki/Trade_finance), which involves multiple parties, such as a supplier, the bank of the supplier, a shipping company, an end client, and their bank, the paperwork might be more voluminous. Storing the equivalent electronic documentation directly on the blockchain would work but wouldn’t be ideal for a couple of reasons:

  • The electronic documentation would bloat transactions referencing it, which would be processed more slowly.
  • Bigger transactions require more gas to process and are therefore more expensive.

An alternative solution would be to store the electronic documentation on an off-blockchain database and include in the transaction only a cryptographic hash of each of the documents, to prove their content. This solution isn’t perfect, though, because the off-blockchain database would be a centralized resource not easily accessible by the Ethereum nodes. Even if the decentralized application could access the database where the documentation was stored, having this centralized repository would be contrary to the spirit of decentralized applications.
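
The hashing half of this approach is straightforward with Web3.js. In the following sketch, documentContent stands for the document text, and provenanceContract with its storeProof function are hypothetical names for whatever contract you’d use to anchor the proof on-chain:

// Compute the keccak256 hash of the document content off-blockchain
var documentHash = web3.sha3(documentContent);
// Store only the 32-byte hash on-chain (hypothetical contract method)
provenanceContract.storeProof(documentHash, {from: eth.accounts[0]});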

An ideal solution instead would be based on a decentralized storage repository. This is exactly what the Swarm platform, which is part of the Ethereum technology stack, aims to provide. Another valid alternative would be to use the existing IPFS distributed storage network. Let’s explore these two options.

9.4.1. Swarm overview

Swarm is a content distribution platform whose main objective is to provide decentralized and redundant storage to Ethereum Dapps. It focuses specifically on holding and exposing smart contract data and code, as well as blockchain data.

Storage is decentralized in Swarm through a P2P network that makes it resistant to distributed denial of service (DDoS) attacks and censorship, and that provides fault tolerance and guarantees zero downtime because it has no single point of failure. The architecture of the P2P Swarm network, shown in figure 9.7, is similar to that of the Ethereum network: each node runs a Swarm client that manages local storage and communicates with its peer nodes through a common standard protocol called bzz. Currently, only one client implementation is available, written in the Go language, and it’s included in the Geth & Tools package you can download from the Go Ethereum website. The main difference from the Ethereum network is that all Ethereum nodes have the same copy of the blockchain database, whereas each Swarm node contains a different set of data, as also illustrated in figure 9.7.

Figure 9.7. Architectural diagram of a Swarm network. The Swarm network, made of nodes each running a Swarm client, is similar to the Ethereum network, in which every node runs an Ethereum client. Contrary to Ethereum nodes, which all have the same copy of the blockchain data, each Swarm node contains a different set of data.

A Swarm node is linked to an Ethereum account known as the swarm base account. The keccak256 hash of the address of the swarm base account determines the swarm base address, which is the address of a Swarm node within the Swarm network. A Swarm network is associated with a specific Ethereum network: for example, the main production Swarm network is associated with MAINNET, and a separate Swarm network is associated with the Ropsten Ethereum network. Because Swarm is part of the Ethereum technology stack, it makes full use of other components of the ecosystem, such as ENS.
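
As a rough illustration of that derivation (a sketch of the idea, not an exact reproduction of the client’s internals), you could hash the base account address as hex data in the geth console:

> web3.sha3(eth.accounts[1], {encoding: 'hex'})   // the node's swarm base address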

When content is uploaded to Swarm, it’s broken down into 4 KB chunks that get scattered throughout the Swarm network. The upload process is illustrated in figure 9.8.

It involves the following steps:

  1. The caller uploads the content, typically a file, to the distributed preimage archive (DPA), which is the storage and retrieval gateway.
  2. The DPA calls a component called chunker.
  3. The chunker

    1. chops the content up into 4 KB pieces called chunks
    2. calculates a cryptographic hash for each chunk
  4. The hashes of the chunks (or blocks) are placed in a chunk-index document.
  5. If the chunk-index document is bigger than 4 KB, it’s chopped up into chunks whose hashes are then placed into a further document. This process goes on until the chunks are organized into a tree structure with a root index document at the top, followed by a layer of index chunks in the middle and the content chunks at the bottom, as illustrated in figure 9.9. This data structure is a Merkle tree, the same data structure a blockchain database uses to link its blocks. (A toy illustration follows this list.)
    Figure 9.8. The Swarm upload process: 1. the caller uploads a file to the distributed preimage archive gateway; 2. the DPA sends the file to a chunker; 3. the chunker chops the file into 4 KB chunks and calculates a hash for each one; 4. the chunk hashes are placed in a chunk-index document; 5. the chunk-index document is chunked and reorganized in a Merkle tree structure, whose root hash is called the root key; 6. the chunker stores each chunk onto the netStore against its hash; 7. the netStore distributes 4 KB chunks across the Swarm network; 8. the chunker returns the root key to the DPA; 9. finally, the DPA returns the root key to the caller.

  6. The chunker stores each chunk on the netStore against its hash key.
  7. The netStore is an implementation of a distributed hash table (DHT) across the Swarm network, so chunks are stored on many Swarm nodes. Because the key of this distributed hash table is a cryptographic hash key, which is a representation of the underlying content, this way of storing data is also known as content addressable storage (CAS).
  8. The chunker returns the hash key of the root index document, known as the root key, to the DPA.
  9. The DPA finally returns the root key to the caller. This will later be used to download the original file from Swarm.
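
To make the Merkle tree idea concrete, here’s a toy sketch of how a parent node could be derived from two child hashes, in the same style as the namehash code you saw earlier. It illustrates the principle only and doesn’t reproduce Swarm’s exact chunk format:

// Hash two leaf chunks
var hashA = web3.sha3('chunk A data');
var hashB = web3.sha3('chunk B data');
// The parent node is the hash of the concatenated child hashes
var parent = web3.sha3(hashA + hashB.slice(2), {encoding: 'hex'});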

Figure 9.9. Chunk and chunk-index Merkle tree structure. The document at the top contains the hashes of chunks of the initial chunk-index document (containing the hashes of all 4 KB chunks). The intermediate layer is made of chunks of the initial chunk-index document. The layer at the bottom contains the 4 KB chunks of the original file.

The download process goes through a similar workflow, but in reverse order, as shown in figure 9.10:

  1. A caller hands a root key to the DPA.
  2. The DPA calls the chunker, and it supplies the root key.
  3. The chunker retrieves the root chunk associated with the root key from the netStore, then walks the tree until it has retrieved all the chunks from the Swarm network. While chunks are flowing from their netStore location (the specific Swarm node they’re stored on) to the chunker, they get cached on each Swarm node they pass through, so if the same content is requested again, subsequent downloads will often be faster.
  4. The chunker reconstructs the file from the chunks and returns it to the DPA.
  5. The DPA returns the requested file to the caller.

From an operational point of view, the sustainability of the Swarm platform is based on monetary incentives aimed at encouraging and rewarding participants who provide the underlying storage resources. Storage is traded between participants who require it and those who provide it, so it tends to be allocated efficiently.

Figure 9.10. The Swarm download process: 1. a caller hands a root key to the DPA; 2. the DPA calls the chunker, and it supplies the root key; 3. the chunker retrieves the root chunk associated with the root key from the netStore, then walks the tree until it has retrieved all the chunks from the Swarm network; 4. the chunker reconstructs the file from the chunks and returns it to the DPA; 5. the DPA returns the requested file to the caller.

9.4.2. Uploading and downloading content to and from Swarm

In this section, I’ll show you how to upload content to Swarm, get its root key, and then download it back from Swarm using the root key.

Connecting to Swarm

The first step you have to take is to download the Swarm client, swarm.exe, from the Go Ethereum website. If you downloaded geth from the Geth & Tools archive (or installer) link, you should already have swarm.exe in the same folder you’re running geth from. Otherwise, go back to the Go Ethereum website and download the Geth & Tools 1.8.12 package, which I believe is the latest archive still containing swarm.exe. Unzip it and copy swarm.exe into the same folder where you’ve placed geth.exe. In my case, I’ve placed it here: C:\Program Files\geth.

Now start up geth against TESTNET. (Remember to use the --bootnodes option if peer nodes aren’t located quickly, as you did at the start of chapter 8.) Type the following:

C:\Program Files\geth>geth --testnet

Geth will start synchronizing, as expected. From a separate command shell, start an interactive console:

C:\Program Files\geth>geth attach ipc:\\.\pipe\geth.ipc

Then, from the interactive console, get the address of your testnet accounts[1]:

> eth.accounts[1]
"0x4e6c30154768b6bc3da693b1b28c6bd14302b578"

You’ll run the Swarm client under this account by opening a new OS console and executing the following command from the folder where you placed the swarm executable (adjusting the Ethereum testnet data folder path to your own):

C:\Program Files\geth>swarm --datadir
 C:\Users\rober\AppData\Roaming\Ethereum\testnet
 --bzzaccount 0x4e6c30154768b6bc3da693b1b28c6bd14302b578

Table 9.2 explains the options I’ve used to start up the Swarm client.

Table 9.2. Options used to start up the Swarm client

Option         Purpose
--datadir      Specifies the datadir path related to the environment to use—in our case, TESTNET (Ropsten)
--bzzaccount   Specifies the Ethereum account to use—in our case, TESTNET accounts[1]

As you can see in figure 9.11, you’ll be asked to unlock accounts[1] by providing its password. Enter the password, as requested, and the client will start up with output similar to that in the screenshot in figure 9.12.

Figure 9.11. Unlocking the Ethereum account you’re using to start up the Swarm client

It might take a few minutes before your Swarm client synchronizes with a number of peers (by default up to a maximum of 25). Output similar to the following indicates capable peers have been found:

INFO [03-11|19:49:47] Peer faa9a1ae is capable (0/3)
INFO [03-11|19:49:47] found record <faa9a1aef3fb3b0792420a59f929907d86c0937d
 b9310d6835a46f44301faf05> in kaddb
INFO [03-11|19:49:47] syncronisation request sent with address: 00000000
 -00000000, index: 0-0, session started at: 0, last seen at: 0, latest
 key: 00000000

INFO [03-11|19:49:47] syncer started: address: -, index: 0-0, session
 started at: 933, last seen at: 0, latest key:
INFO [03-11|19:49:47] syncer[faa9a1ae]: syncing all history complete
INFO [03-11|19:49:50] Peer d3f2a5c8 is capable (0/3)
Figure 9.12. Swarm start-up output

Uploading Content

Now that you’re connected to the Swarm network, you can upload some sample text onto the network. Open a new OS console and submit this HTTP request to your Swarm client through curl:

C:\Users\rober>curl -H "Content-Type: text/plain"
 --data-binary "my sample text" http://localhost:8500/bzz:/

You’ll immediately get a response showing the root key associated with the submitted content:

eab8083835dec1952eae934eef05dda96dadbcd5d0685251e8c9faab1d0a0f58
Downloading Content

To get the content back from Swarm, you can now submit a new request that includes the root key you obtained:

C:\Users\rober>curl
 http://localhost:8500/bzz:/eab8083835dec1952eae934eef05dda96dadbcd5d0685251e8c9faab1d0a0f58/

As expected, you’ll get back the text you submitted earlier:

my sample text
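
If you’d rather script the upload than use curl, you can call the same HTTP endpoint from Node.js. Here’s a minimal sketch, assuming your local Swarm client is listening on its default port, 8500:

var http = require('http');
var req = http.request({
    host: 'localhost',
    port: 8500,
    path: '/bzz:/',
    method: 'POST',
    headers: {'Content-Type': 'text/plain'}
}, function (res) {
    // The response body is the root key of the uploaded content
    res.on('data', function (chunk) {
        console.log('root key: ' + chunk);
    });
});
req.write('my sample text');
req.end();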

The official documentation is an excellent resource to learn more about Swarm and to try out more advanced features: http://mng.bz/4OBv. But you also should be aware that the Swarm initiative has been criticized by some members of the decentralized web community for duplicating the effort of IPFS, a project with similar objectives but with a more general purpose. The following section explains IPFS and the reason for the controversy.

9.4.3. IPFS overview

IPFS stands for InterPlanetary File System and, as you might guess from its name, is a hypermedia distribution protocol whose objective is to support a decentralized way of storing and sharing files. As with Swarm, storage is distributed over a P2P network, which you can consider a distributed file system. The IPFS way of storing files provides the same benefits as the Swarm network, such as zero downtime and resistance to DDoS attacks and censorship.

Files aren’t stored in their entirety in a single network location; they’re broken down into blocks, which are then transformed into IPFS objects and scattered across the network. An IPFS object is a simple structure containing two properties

{
     Data—Byte array containing data in binary form
     Links—Array of Link objects, which link to other IPFS objects
}

where a Link object has the following structure:

{
      Name—String representing the name of the link
      Hash—Hash of the linked IPFS object 
      Size—Full size of linked IPFS document and all of its linked IPFS
 objects
}

Each IPFS object is referenced by its hash.

An example of the IPFS object associated with a small file that’s decomposed into a single file block is shown in the following listing.

Listing 9.1. IPFS object associated with a file containing a single block
{
    "Links":[],                                                            1
    "Data":"\u0008\u0002\u0012\u0019This is some sample text.\u0018\u0019"     2
}

  • 1 There are no links to other IPFS objects because the file is made of a single block.
  • 2 This is unstructured binary data contained in the file (up to a max of 256 KB).

An example of the IPFS object associated with a large file, bigger than 256 KB and broken down into multiple blocks, is shown in the following listing.

Listing 9.2. IPFS object of a file larger than 256 KB, split into various blocks
{
"Links":[                                               1
  {
     "Name":"",                                         2
     "Hash":
 "QmWXuN4UW2ZJ2fo5fj8xt7raMKvsihJJibUpwmtEhbHBci",    3
     "Size":262158                                      4
   },
   {     
      "Name":"",
      "Hash":"QmfHm32CQnagmHvNV5X715wxEEjgqADWpCeLPYvL9JNoMt",
       "Size":262158
   },
   . . .
   {
       "Name":"",
       "Hash":"QmXrgsJQGVxg7iH2tgQF8BV9dEhRrCVngc9tWg8VLFn7Es",
       "Size":13116
    }
],
  "Data":"u0008u0002u0018@ u0010 u0010 u0010 u0010 f"
}

  • 1 This file has been split into multiple blocks of 256 KB, each corresponding to an item in the Links list.
  • 2 Block name
  • 3 Block hash
  • 4 Block size (256 KB)

Each block referenced in the Links array is represented by a document like that shown in listing 9.1. The workflow followed by an IPFS client for uploading a file on IPFS is illustrated in figure 9.13.

Let’s follow the steps of the upload workflow in detail:

  1. A user uploads a file to an IPFS node.
  2. The IPFS node breaks the file down into blocks of a certain size (typically 256 KB).
  3. An IPFS object is created for each file block. This looks like the one shown in listing 9.1. A cryptographic hash is calculated for each IPFS object and associated with it.
  4. An IPFS object is created for the file. This contains links to IPFS objects associated with all the file blocks and looks like the IPFS object shown in listing 9.2. A cryptographic hash is calculated for this IPFS object.
    Figure 9.13. The IPFS upload process: 1. a user uploads a file to an IPFS node; 2. the IPFS node breaks down the file into 256 KB blocks; 3. an IPFS object is created for each file block; 4. an IPFS object is created for the file, and it contains links to IPFS objects associated with all the file blocks; 5. each block is stored at a different network location, and an index holding a map between block hashes and corresponding network locations is maintained on each node.

  5. Each block is stored at a different network location, and an index holding a map between block hashes and corresponding network locations is maintained on each node. You might have realized that content is referenced by its own cryptographic hash key, as in the case of the Swarm platform, so you can also consider IPFS to be content addressable storage (CAS).

Given that content is referenced by its hash, this design is focused on managing immutable files efficiently. For example, only one copy of a document has to exist in the network because its hash points to a single piece of content, so duplication is eliminated.

But IPFS is also capable of managing mutable documents by tracking their changes through versioning. When a file changes, only its amended blocks need to be hashed, stored on the network, and indexed, and the unaffected blocks will be reused. The workflow of the download process is shown in figure 9.14.

Here are the steps of the workflow in detail:

Figure 9.14. The IPFS file download process: 1. IPFS is queried for a file associated with a certain IPFS file object hash key; 2. the IPFS client requests the file object from the corresponding IPFS node; 3. the requested node returns the IPFS file object; 4. the IPFS client scans each link on the Links property of the IPFS file object; 5. each requested IPFS node returns the corresponding IPFS block object; 6. the original file is recomposed on the IPFS node serving the request, and it’s returned to the caller.

  1. A user queries IPFS for a file associated with a certain IPFS file object hash key.
  2. The IPFS client retrieves the network location of the IPFS file object associated with the provided hash key by looking it up on the local IPFS index, and then it requests the file object from the corresponding IPFS node.
  3. The requested node returns the IPFS file object.
  4. The IPFS client scans each link on the Links property of the IPFS file object. For each link, it retrieves the network location associated with the IPFS object key from the local index and then uses the network location to retrieve the corresponding IPFS block object.
  5. Each requested IPFS node returns the corresponding IPFS block object.
  6. The original file is recomposed on the IPFS node serving the request, and it’s returned to the caller. (A short scripted example follows this list.)
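
If you run a local IPFS node, you can retrieve a file through the node’s HTTP gateway in much the same way you queried Swarm earlier. A minimal Node.js sketch, assuming the gateway is on its default port, 8080, and using a placeholder hash (replace QmYourFileHash with a real IPFS object hash):

var http = require('http');
http.get('http://localhost:8080/ipfs/QmYourFileHash', function (res) {
    // Stream the recomposed file content to standard output
    res.on('data', function (chunk) {
        process.stdout.write(chunk.toString());
    });
});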

Unlike Swarm, IPFS gives no direct incentives to its P2P participants for contributing to the network’s file storage resources; it relies on FileCoin, a separate but related blockchain-based initiative, to reward active participants.

If you want to learn more about IPFS, download the client and give it a go at https://ipfs.io/docs/getting-started/. I recommend you also have a look at the Git book The Decentralized Web Primer, which has various tutorials on how to install an IPFS client and how to interact with the network and examine IPFS objects through common operations such as uploading and then downloading a file: http://mng.bz/QQxQ. (Click the green Read button to access the content.)

9.4.4. Swarm vs. IPFS

At this point, do you think Swarm is duplicating the effort of IPFS, as some members of the decentralized web have argued? Now that you know about both decentralized content management infrastructures, you can probably judge for yourself whether the Swarm initiative has been worthwhile. Table 9.3, which summarizes the main features of both platforms, might help you answer the question.

Table 9.3. Comparison of Swarm vs. IPFS

Feature                             Swarm           IPFS
Storage architecture                Decentralized   Decentralized
Network architecture                P2P             P2P
Content addressable storage (CAS)   Yes             Yes
Block/chunk size                    4 KB            256 KB
Native integration with Ethereum    Yes             No
Incentive strategy                  Built-in        External (through FileCoin)

Fans of the Swarm platform argue that its smaller chunk size, which allows much lower transmission latency, and its deeper integration with Ethereum are by themselves two key reasons for the existence of Swarm. You can find further analysis of the difference between Swarm and IPFS online on various forums, such as Ethereum stack exchange: http://mng.bz/Xg0p.

9.5. Accessing external data through oracles

Conventional web applications consume a variety of external services, typically by performing REST API calls or invoking legacy web services. You might be surprised to hear that this isn’t possible in the Dapp world. By design, Ethereum contracts can’t access external sources. This is to avoid two main sets of issues:

  • Trust issues—Participants might be wary about the authenticity of the data and its potential manipulation before making it into the blockchain.
  • Technical issues—The data provider might struggle to serve thousands of simultaneous requests coming from the Ethereum network, therefore compromising the block creation and validation process.

How do you get external data into your smart contract so you can work around the restrictions that the Ethereum infrastructure imposes and be confident about the data’s authenticity? You do it through oracles. An oracle is, in short, a bridge between the blockchain network and the outside world. It takes care of fetching the queried data from external data providers, and then it returns it to the requesting contract together with a proof of authenticity. With the process arranged this way, you can see an oracle as a middleman that merely plays a facilitating role. Although it’s a point of centralization, the requesting contract doesn’t need to trust the oracle, because the oracle can’t modify the data it’s returning without invalidating it against the proof of authenticity (which the end user can verify). Figure 9.15 shows the main components that are part of a typical oracle-based data-feeding solution:

  • A contract—This executes a query to retrieve some data.
  • An oracle—This connects the contract to the relevant data provider by resolving the query and fetching the data from the data provider.
    Figure 9.15. An oracle is a bridge between the blockchain network and the outside world. It takes care of fetching the requested data from external data providers and returns it to the requesting contract with a proof of authenticity.

  • A set of data sources—These might include REST APIs, legacy web services, online random generators, or online calculators, for example.
  • TLSNotary—This service generates cryptographic proofs of online data.
  • IPFS store—This is where the returned data gets stored, together with its proof of authenticity, for later off-blockchain verification, if needed.

9.5.1. Feeding oracles

You can use two main strategies for feeding an oracle so consumers can trust its data:

  1. Independent participants can feed an oracle. In this case, the oracle aggregates the original data coming from the different participants through a consensus model, and then it feeds the data to the consumer.
  2. A single data provider can feed an oracle. In this case, the oracle supplies the consumer with a copy of the original data, accompanied by a proof of authenticity of that data.
Oracle Fed by Independent Participants

When independent participants feed an oracle, the approved data set is generated on a consensus basis, for example by averaging numeric values or selecting the most frequent non-numeric values. This way of feeding data, which happens in a decentralized fashion and is subject to a consensus, seems to naturally fit the spirit of decentralized applications. But it has various drawbacks:

  • A high number of feeders might be necessary to generate a reliable data set.
  • The oracle provider relies on feeders constantly keeping up with new data requests.
  • All the data feeders might expect to get paid regardless of the quality of their data. This might prove expensive for the oracle provider.
Oracle Fed by the Provider from a Single Data Source

When a single source feeds an oracle, it demonstrates that the exposed data is genuine and untampered with by returning it to a client together with a proof-of-authenticity document. Such a document can be generated by services like TLSNotary, based on various technologies, such as auditable virtual machines and trusted execution environments.

Oraclize offers one of the most popular frameworks for feeding smart contracts from a single source. This solution has two main advantages with respect to oracles fed by multiple participants:

  1. Dapp developers and users don’t need to trust Oraclize, as they can verify the truthfulness of the data independently against the proof of authenticity, both on-chain from within contract code and off-chain through web verification tools.
  2. Data providers don’t need to implement new ways of distributing data, on top of their current web services or web APIs, to feed decentralized applications.

Enough talking! I’ll now show you how to build your first oracle consumer contract.

9.5.2. Building a data-aware contract with Oraclize

If you want to hook a contract into Oraclize, you have to

  • import a Solidity file named oraclizeAPI.sol, available from the Oraclize GitHub repository
  • inherit your contract from a base contract called usingOraclize

Your contract, as illustrated in the sample oracle shown in listing 9.3 (which comes from the Oraclize documentation), should contain

  • one or more state variables holding the latest value of a copy of the external data being requested; in this example, ETHXBT (the Ether to Bitcoin exchange rate)
  • an update() function that an end user can invoke to refresh the local copy of the external data through a request to Oraclize
  • a callback function named __callback, which is invoked from the result transaction that Oraclize produces
Listing 9.3. Contract providing ETHXBT rate from the Kraken exchange through Oraclize
pragma solidity ^0.4.0;
import "github.com/oraclize/ethereum-api/oraclizeAPI.sol";             1

contract KrakenPriceTicker is usingOraclize {                          2

    string public ETHXBT;                                              3

    event newOraclizeQuery(string description);                        4
    event newKrakenPriceTicker(string price);                          5

    function KrakenPriceTicker() {                                     6
        oraclize_setProof(proofType_TLSNotary | proofStorage_IPFS);    7
        update();                                                      8
    }

    function __callback(bytes32 myid, string result, bytes proof) {    9
        if (msg.sender != oraclize_cbAddress()) throw;
        ETHXBT = result;                                               10
        newKrakenPriceTicker(ETHXBT);                                  11
        update();                                                      12
    }

    function update() payable {                                        13
        if (oraclize_getPrice("URL") > this.balance) {                 14
            newOraclizeQuery("Oraclize query was NOT sent, please add some ETH to cover for the query fee");
        } else {
            newOraclizeQuery("Oraclize query was sent, standing by for the answer..");
            oraclize_query(60, "URL",
                "json(https://api.kraken.com/0/public/Ticker?pair=ETHXBT).result.XETHXXBT.c.0");   15
        }
    }
}

  • 1 Imports the Oraclize client code from their GitHub repository
  • 2 Inherits from the base contract usingOraclize
  • 3 State variable holding the external data: exchange rate for Ether to Bitcoin from Kraken
  • 4 Event logging whether the data query has been sent to Oraclize
  • 5 Event logging whether Oraclize has returned the requested data
  • 6 Contract constructor
  • 7 Specifies that the data requested should be accompanied by TLSNotary proof and that the proof should get stored on IPFS
  • 8 Sets the ETHXBT state variable when the contract is created
  • 9 Callback invoked by Oraclize when returning the requested data to the contract
  • 10 Updates the ETHXBT state with the value that Oraclize returned
  • 11 Logs the requested data that Oraclize has returned
  • 12 Triggers a new update so that the contract keeps refreshing ETHXBT continuously
  • 13 Triggers the update of ETHXBT, which can be invoked by an end user or internally, as mentioned previously
  • 14 Checks if the contract has enough Ether to fund the data request to Oraclize
  • 15 Data request query to Oraclize
Oraclize Data Request

Look closely at the data request to the Oraclize engine within the update method:

oraclize_query(60, "URL",
     "json(https://api.kraken.com/0/public/Ticker?pair=ETHXBT)
.result.XETHXXBT.c.0");

The data request is performed by calling the oraclize_query() function, inherited from the usingOraclize base contract, with a set of parameters, as shown in figure 9.16:

  • Request delay—Number of seconds that should be waited before retrieving the data (can also be an absolute timestamp in the future)
  • Data source type—Oraclize supports various data source types, but we’ll focus mainly on the following:

    • URL—Website or HTTP API endpoint
    • IPFS—Identifier of an IPFS file (content hash)
  • Query—This is a single parameter or an array of parameters whose values depend on the data source type. For example, for requests of type URL, if you supply only one parameter (the URL of the data source), the call is assumed to be an HTTP GET request. If you supply two parameters, the second is assumed to be the body of an HTTP POST request. For a request of type IPFS, the only parameter that you should supply is the IPFS content hash.
Figure 9.16. oraclize_query() parameters

As you can see in figure 9.16, results are extracted from the query through a result parser, which depends on the nature of the data source being called. Table 9.4 summarizes the supported parsers; a short off-chain sketch of what the json parser extracts in this example follows the table.

Table 9.4. Oraclize query result parsers

Parser type     Parser identifier   Description
JSON parser     json                Converts results to a JSON object, which you can extract specific properties from
XML parser      xml                 Typically parses legacy web service results
HTML parser     html                Useful for HTML scraping
Binary helper   binary              Can extract items from binary results with slice(offset, length)
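
To get a feel for what the json parser does with the query in listing 9.3, you can replicate the extraction off-chain in Node.js. This sketch calls the same Kraken endpoint and walks the same property path as the Oraclize query expression:

var https = require('https');
https.get('https://api.kraken.com/0/public/Ticker?pair=ETHXBT',
    function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            var ticker = JSON.parse(body);
            // Same path as json(...).result.XETHXXBT.c.0 in the query
            console.log(ticker.result.XETHXXBT.c[0]);
        });
    });
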
Results Callback Function

Now that you’ve learned how to perform a data request, we’ll look at how the Oraclize engine responds with the results. As you saw in figure 9.15, when processing a request, the Oraclize engine grabs the results from the relevant data source, and then it creates a result transaction, which it sends back to the Ethereum network. This transaction is also known as an Oraclize callback transaction, because during its execution, it calls back the oracle contract performing the request on its __callback function:

    function __callback(bytes32 myid, string result, bytes proof) {
        if (msg.sender != oraclize_cbAddress()) throw;
        ETHXBT = result;
        newKrakenPriceTicker(ETHXBT);
        update();
    }

When the result transaction calls the __callback function, the value of the ETHXBT state variable is updated. You can then use it in the rest of the contract code.

9.5.3. Running the data-aware contract

If you want to run the data-aware contract from listing 9.3, you need the Oraclize Remix plugin, which can resolve the oraclizeAPI.sol import directly from the Oraclize GitHub repository: http://mng.bz/y1ry.

A dialog box will appear, warning that “Remix is going to load the extension “Oraclize” located at https://remix-plugin.oraclize.it. Are you sure to load this external extension?” Click OK.

I encourage you to try out KrakenPriceTicker.sol, which you can find already set up within the Gist menu on the left side of the screen (Gist > KrakenPriceTicker). Before running it:

  • check and make sure the compiler version is set to 0.4.24+commit.e67f0147 in the Solidity Version panel (on the Settings tab)
  • check and make sure the Environment is set to JavaScript VM (on the Run tab)

After having done so, do the following:

  1. Open the Run tab and click Deploy.
  2. Click the KrakenPriceTicker drop-down in the bottom Deployed Contracts panel.
  3. Click Update. (If you want to emulate the behavior of a contract call, you could also set the Value field at the top of the screen, for example to 20 finney, but within Remix, this isn’t necessary.) At this point, the value of ETHXBT gets updated.
  4. Click the ETHXBT button to check the value of the exchange rate.

9.6. Dapp frameworks and IDEs

Four categories of tools can improve the Dapp development cycle:

  • Development IDEs
  • Development frameworks
  • Testing frameworks
  • Web UI frameworks

9.6.1. Development IDEs

Development IDEs and development frameworks are tools that help you speed up the development cycle. Although IDEs and development frameworks offer similar functionality, the former are slightly more focused on code editing and compilation, whereas the latter offer powerful deployment capabilities.

For a few months, Ethereum Studio appeared to be the de facto IDE for developing Ethereum Dapps, because it provided good code editing capabilities coupled with Web3 integration and smooth contract deployment functionality. But then ether.camp, the company behind it, stopped supporting it. As a result, developers are advised to instead use generic code editing tools, such as Sublime, Atom, Visual Studio Code, Vi, and Emacs, configured with the related Solidity plugin.

9.6.2. Development frameworks

The objective of Ethereum development frameworks is to streamline the development cycle and allow developers to focus on writing code rather than spending most of their time compiling it, redeploying it, and retesting it manually.

Various third-party smart contract frameworks have appeared since the launch of the Ethereum platform:

  • Truffle
  • Populus
  • Dapp (formerly known as Dapple)
  • Embark
Truffle

Truffle is probably the most advanced Ethereum development framework, and it focuses mainly on simplifying the building, testing, packaging, and deployment of Solidity contracts. It’s distributed as a Node.js package and provides a REPL console.

Truffle’s key selling point is migration—the way this framework manages the scripting and configuration of contract deployment. This is the framework you’ll use in the next few chapters.

Populus

Populus is functionally similar to Truffle in that it’s designed to simplify the compile-test-deploy cycle by working on smart contract projects organized with a specific folder structure. It provides configuration management that allows you to progress smoothly throughout development from an in-memory blockchain such as Ganache, to a private internal network, and finally to a public one. The peculiarity of Populus with respect to other frameworks is that it allows a developer to script unit tests or deployment instructions in Python.

Embark

Embark aims to be a platform-agnostic Dapp framework to simplify the development and deployment of any decentralized application. This framework simplifies the management of multicontract Dapps, and you can configure it to automatically deploy a contract when a code change is detected. It allows decentralized storage through the IPFS protocol (and Swarm) and decentralized messaging through the Whisper protocol.

Dapp

The Dapp framework is geared mainly toward the Linux world and is distributed through the Nix Package manager. The emphasis of this framework is on contract packaging under the Ethereum Smart Contract Packaging Specification (https://github.com/ethereum/EIPs/issues/190) and contract code storage decentralization through the IPFS protocol, which we examined in section 9.4. Dapp also provides a unit testing facility through ethrun and ds-test.

Deciding which development framework to adopt might be difficult, as all of them offer similar compile-test-deploy functionality, although delivered in slightly different ways. It might sound obvious, but the best way to determine which one suits your needs best is to try out all of them.

Given the bias of the Ethereum platform toward JavaScript, it’s natural that several generic JavaScript frameworks have become a common feature of the Ethereum ecosystem. Let’s see why you should consider including JavaScript testing and UI frameworks in your development environment.

9.6.3. Testing frameworks

Using a generic JavaScript testing framework, as opposed to the unit testing functionality that a main development framework (such as Truffle or Embark) offers, provides

  • more advanced unit testing capabilities, for example, support for async calls, exit status values for continuous integration systems, timeout handling, meta-generation of test cases, and more extensibility around the use of assert libraries
  • better system testing automation: you can automate tests involving end-to-end interaction through a private or public test network, and they can handle timeouts, retries, and other use cases around communication with the contract

The two most popular JavaScript unit testing frameworks used for developing decentralized applications are Mocha and Jasmine, and you’ll be using Mocha in the next few chapters. Let’s now move on to the web UI frameworks.

9.6.4. Web UI frameworks

Although the UI is an important element of a Dapp, because it connects the end user with the backend smart contracts, the Ethereum platform doesn’t yet fully support any technology to develop the presentation layer of an Ethereum application. Because you can include and reference web3.js in the JavaScript of a plain HTML5 web page, it’s natural to think that an easy way of exposing a Dapp is through web pages.

Given the abundance of excellent JavaScript UI frameworks, it’s hard to recommend any one framework in particular. But it’s worth mentioning that frameworks such as Meteor, Angular, Vue, and, more recently, React are getting increasing traction in the Ethereum community. As far as we’re concerned, we’ll stick to minimalistic solutions based on plain HTML and JavaScript, but feel free to embellish the UI code with the framework of your choice.

Summary

  • The previous chapters introduced a restricted view of the Ethereum ecosystem, limited to the following:

    • Core infrastructural components—The Go Ethereum (geth) client, the Ethereum wallet, MetaMask, and Ganache
    • Core development tools—Solidity (the EVM smart contract language), Remix (the online Solidity IDE), solc (the Solidity compiler), JSON-RPC (the low-level Ethereum client API), Web3.js (a high-level Ethereum client API written in JavaScript), and Node.js (a general-purpose JavaScript runtime, not Ethereum-specific)
  • The Ethereum ecosystem includes a wider set of infrastructural components, such as ENS (for decentralized name resolution to addresses), Swarm and IPFS (for decentralized content storage), Whisper (for decentralized messaging), oracles (for importing data from public web-based data providers), and Infura (for managed Ethereum nodes).
  • The Ethereum Name Service, also known as ENS, offers a decentralized and secure way to reference resource addresses, such as account and contract addresses, through human-readable domain names. It has objectives similar to the internet DNS.
  • Storing relatively big content on the blockchain isn’t recommended because it’s clumsy and expensive. A better solution is to use decentralized storage systems, such as Swarm and IPFS.
  • Swarm is based on the Ethereum technology stack and is Ethereum network-aware, and it’s often the preferred solution for storing content off-chain that can be referenced on the Ethereum blockchain through cryptographic hash-based identifiers.
  • IPFS is a technology-agnostic protocol for content storage and offers a more widely known and tested solution, at the expense of inferior performance and looser Ethereum integration.
  • Oracles, such as Oraclize, allow smart contracts to import data from outside the Ethereum network and accompany it with a proof of authenticity.
  • The Ethereum ecosystem also includes a wider set of development tools, such as Truffle, the main smart contract framework; generic JavaScript testing frameworks, such as Mocha and Jasmine; and JavaScript web UI frameworks, such as Angular, ReactJS, and Meteor.