In previous chapters, you learned about the main components of the Ethereum platform and how to implement and deploy a decentralized application using simple tools such as the Remix IDE and the geth console. You then improved the efficiency of the development cycle by partially automating the deployment with Node.js. You made further efficiency improvements by deploying and running your smart contracts on a private network and, ultimately, on Ganache, where you progressively reduced and almost eliminated the impact of infrastructural aspects of the Ethereum platform on the run and test cycle.
The tool set you’ve used so far has been pretty basic, but it has helped you understand every step of the build and deployment process of a smart contract. You’ve also learned about every step of the lifecycle of a transaction, from its creation, through a Web3 call, to its propagation to the network, to its mining, and ultimately to its persistence on the blockchain. Although you might have found these tools helpful and effective for getting started quickly and for learning various concepts in detail, if you decide to develop Ethereum applications on a regular basis, you’d use a different tool set.
This chapter gives you an overview of the wider Ethereum ecosystem, both from a platform point of view and from a development tool set point of view. You’ll learn about additional components of the Ethereum platform and alternative IDEs and frameworks that will allow you to develop and deploy Dapps with less effort. But before we start to explore the full Ethereum ecosystem, I’ll recap the current view of the platform and the development tool set.
Figure 9.1 summarizes all you know so far about the Ethereum platform and the development tool set.
Although you’ve installed the Go Ethereum client (geth) and the Ethereum wallet, you’re aware you could have installed alternative clients, such as cpp-ethereum (eth), Parity, EthereumJ, or pyethapp. Most of these come with a related wallet. You also could have decided to connect through MetaMask to an external node (in fact, an Infura node, as you’ll see later) or to a mock node with Ganache.
You’ve developed your smart contracts in Solidity using Remix (Browser Solidity). When needed, you’ve moved the code to text files and compiled them with the solc compiler. In theory, you could have implemented smart contracts in other EVM languages, such as Serpent or LLL, but currently Solidity is widely regarded as the most reliable and secure language. Time will tell whether Serpent makes a comeback or new alternatives such as Vyper start to gather momentum.
You interacted with the network, including your deployed contracts, in Web3.js, initially through the interactive geth console. Then you moved to Node.js for better extensibility and automation.
Web3.js is a JavaScript-specific high-level API that wraps the low-level JSON-RPC API. Other high-level APIs are available that target other languages, such as web3j (for Java), Nethereum (for .NET), and ethereum.rb (for Ruby).
Figure 9.2 provides a full view of the current Ethereum ecosystem, where you can see an additional set of development IDEs and frameworks, such as Truffle, aimed at improving the development experience. UI frameworks such as Meteor and Angular aren’t Ethereum-specific, but they’re widely adopted to build modern Dapp UIs. Also, generic testing frameworks such as Mocha and Jasmine are becoming a common feature of Dapp development environments.
You can also see additional infrastructural elements:
The next few sections will examine in detail ENS, Swarm, IPFS, and oracle frameworks.
Whisper falls in the realm of message-oriented protocols. This is an advanced topic, so I won’t cover it further. But if you have experience in message-oriented applications and are eager to learn more, I encourage you to look at the Whisper documentation on the Ethereum wiki on GitHub (http://mng.bz/nQP4 and https://github.com/ethereum/wiki/wiki/Whisper).
From a conceptual point of view, Infura nodes work exactly like other full Ethereum nodes. Bear in mind, though, that Infura clients support a subset of the JSON-RPC standard, so you should check their technical documentation if you’re interested in exploring them further.
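In practice, pointing Web3 at Infura just means building the right JSON-RPC endpoint URL. The sketch below assumes Infura's v3 URL scheme (https://&lt;network&gt;.infura.io/v3/&lt;projectId&gt;) current at the time of writing; YOUR_PROJECT_ID is a placeholder, and the helper function is illustrative, not part of any library:

```javascript
// Minimal sketch: building an Infura JSON-RPC endpoint URL for a given
// network. Check the Infura documentation for the current URL scheme.
function infuraEndpoint(network, projectId) {
  var supported = ['mainnet', 'ropsten', 'rinkeby', 'kovan'];
  if (supported.indexOf(network) === -1) {
    throw new Error('Unsupported network: ' + network);
  }
  return 'https://' + network + '.infura.io/v3/' + projectId;
}

// A Web3 instance could then be pointed at the endpoint, for example:
// var web3 = new Web3(new Web3.providers.HttpProvider(
//   infuraEndpoint('ropsten', 'YOUR_PROJECT_ID')));
console.log(infuraEndpoint('ropsten', 'YOUR_PROJECT_ID'));
```

Because Infura nodes expose only a subset of JSON-RPC, calls such as account management (personal.*) won't work against them; transactions must be signed locally before being sent.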
Before closing this chapter, I’ll briefly present the main development tools for building Dapps. When I move on to the next chapter, I’ll focus on Truffle, the main smart contract development IDE, which I’ll cover in detail through hands-on examples.
The Ethereum Name Service, also known as ENS, manages decentralized address resolution, offering a decentralized and secure way to reference resource addresses, such as account and contract addresses, through human-readable domain names. An Ethereum domain name is, as for internet domain names, a hierarchical dot-separated name. Each part of the domain name (delimited by dots) is called a label. Labels include the root domain at the right, for example eth, followed by the domain name at its immediate left, followed by child subdomains, moving further to the left, as illustrated in figure 9.3.
For example, you could send Ether to roberto.manning.eth (which is a subdomain of eth) rather than to 0xe6f8d18d692eeb02c3321bb9a33542903073ba92, or you could reference a contract with simplecoin.eth rather than with its original deployment address: 0x3bcfb560e66094ca39616c98a3b685098d2e7766, as illustrated in figure 9.4. ENS also allows you to reference other resources, such as Swarm and IPFS content hashes (which we’ll meet in the next section), through friendly names.
ENS is encapsulated as a smart contract, and because its logic and state are stored on the blockchain, and therefore decentralized across the Ethereum network, it’s considered inherently more secure than a centralized service such as the internet Domain Name Service (DNS). Another advantage of ENS is that it’s decentralized not only from an infrastructural point of view, but also from a governance point of view: domain names aren’t managed by a central authority, but they can be registered directly by the interested parties through registrars. A registrar is a smart contract that manages a specific root domain, such as eth. Domains are assigned to the winners of open auctions executed on the related registrar contract, and they also become the owners of the child subdomains.
The ENS system is structured as three main components:
The simple design of the ENS registry, shown in figure 9.5, makes it easily extensible, so you can reference custom resolvers implementing address translation rules of any complexity. Also, it can support a new resource type in the future without needing any modification and redeployment of the registry: a domain name for a new resource type will point to a new resolver. Figure 9.6 shows the domain name resolution process.
As you can see in figure 9.6, a domain name is resolved in a two-step process:
Every mapping record stored on the registry contains the information shown in table 9.1.
| Field | Description | Example |
|---|---|---|
| Domain name | For performance and privacy reasons, a hash of the domain name, called Namehash, is used rather than the domain name itself. Read the sidebar if you want to know more about this. | 0x98d934feea78b34... (Namehash of roberto.manning.eth) |
| Domain owner | The address of the external (user) account or contract account owning the domain name | 0xcEcEaA8edc0830C... |
| Domain name resolver | The address of the resolver contract able to resolve the domain name for the related resource type | 0x455abc566... (public resolver address) |
| Time to live | How long the mapping record should be kept on the registry; it can be indefinite or a specified duration | 6778676878 (expiry date as UNIX epoch) |
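As an illustration of what each record holds, here is a toy in-memory model of the registry mapping (plain JavaScript, not the real registry contract; the truncated addresses are the placeholders from table 9.1):

```javascript
// Toy in-memory model of an ENS registry record. In the real registry the
// keys are Namehash values and the records live in contract storage.
var registry = {};

function setRecord(node, owner, resolver, ttl) {
  registry[node] = { owner: owner, resolver: resolver, ttl: ttl };
}

function getRecord(node) {
  return registry[node];
}

setRecord(
  '0x98d934feea78b34...',   // Namehash of the domain name (truncated placeholder)
  '0xcEcEaA8edc0830C...',   // domain owner (truncated placeholder)
  '0x455abc566...',         // resolver contract address (truncated placeholder)
  6778676878                // time to live (expiry as UNIX epoch)
);

console.log(getRecord('0x98d934feea78b34...').resolver);
```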
For performance reasons and for the privacy of the domain owners, ENS works against a 32-byte hash of the domain name rather than its plain string representation. This hash is determined through a recursive algorithm called Namehash, which, if applied, for example, to roberto.manning.eth, works as follows:
```
labels = ['', 'eth', 'manning', 'roberto']

node      = 0x0000000000000000000000000000000000000000000000000000000000000000
labelHash = keccak256('eth')
          = 0x4f5b812789fc606be1b3b16908db13fc7a9adf7ca72641f84d75b47069d3d7f0
node      = keccak256(node + labelHash)
          = keccak256(0x00000000000000000000000000000000000000000000000000000000000000004f5b812789fc606be1b3b16908db13fc7a9adf7ca72641f84d75b47069d3d7f0)
          = 0x93cdeb708b7545dc668eb9280176169d1c33cfd8ed6f04690a0bcc88a93fc4ae
```

Repeating the same two operations for the 'manning' and 'roberto' labels yields the final node:

```
node = 0x5fd962d5ca4599b3b64fe09ff7a630bc3c4032b3b33ecee2d79d4b8f5d6fc7a5
```

The following table summarizes the steps taken by the Namehash algorithm to hash roberto.manning.eth.
| Step | Label | labelHash | keccak256(node + labelHash) | Node |
|---|---|---|---|---|
| 1 | '' | N/A | N/A | 0x000000000000... |
| 2 | 'eth' | 0x4f5b812789fc... | keccak256(0x0000... 4f5b812789f...) | 0x93cdeb708b7... |
| 3 | 'manning' | 0x4b2455c1404... | keccak256(0x93cde... 4b2455c...) | 0x03ae0f9c3e92... |
| 4 | 'roberto' | 0x6002ea314e6... | keccak256(0x03ae0... 6002ea3...) | 0x5fd962d5ca4599b3b6... |
```javascript
function namehash(name) {
  var node = '0x0000000000000000000000000000000000000000000000000000000000000000'; // Starts with 32 zero bytes
  if (name !== '') {
    var labels = name.split(".");                            // Splits the name into labels
    for (var i = labels.length - 1; i >= 0; i--) {           // Processes labels from right to left
      label = labels[i];
      labelHash = web3.sha3(label);                          // keccak256 hash of the label
      node = web3.sha3(node + labelHash.slice(2), {encoding: 'hex'}); // Hashes the concatenation of node and labelHash
    }
  }
  return node.toString();                                    // Returns the final node
}
```
The web3.sha3() function creates a keccak256 hash. It doesn’t follow the SHA-3 standard, as the name would suggest.
Enough theory! Let’s see how to register a domain name on the ENS instance running on the Ropsten testnet from the geth console.
First of all, download the ENS JavaScript utility library from here: http://mng.bz/vN9r. Place this JavaScript file in a folder, for example, C:\ethereum\ens.
Although useful for learning ENS, the ENS JavaScript utility libraries ensutils.js and ensutils-testnet.js aren’t meant to be used to build a production Dapp.
Now, from an OS shell, start up geth against TESTNET, as you’ve done several times before. (Remember to use the --bootnodes option if peer nodes aren’t located quickly, as you did at the start of chapter 8.) Type the following:
C:\Program Files\geth>geth --testnet
Geth will start synchronizing, as expected. From a separate command shell, start an interactive console:
C:\Program Files\geth>geth attach ipc:\\.\pipe\geth.ipc
Then import the ENS utility library on the interactive geth console you’ve attached:
>loadScript('c:/ethereum/ens/ensutils-testnet.js');
Registering a domain on the TESTNET network means registering it on the .test root domain rather than on .eth, which is associated with MAINNET, the public production network. This means you must use the test registrar.
The domain name I’ll be registering is roberto.manning.test. Pick a similar three-label domain name and adapt the instructions that I’m about to give you, accordingly.
First of all, I have to check if anyone else already owns the manning domain. If someone does, I won’t be able to register my full domain name (roberto.manning.test); I’d have to ask the current owner to do it for me.
This is how you can check if the manning domain is free on the test registrar:
>var domainHash = web3.sha3('manning');
>var domainExpiryEpoch = testRegistrar.expiryTimes(domainHash).toNumber() * 1000;
>var domainExpiryDate = new Date(domainExpiryEpoch);
Check the value of domainExpiryDate (by entering it at the prompt). If it’s earlier than today, the domain is free; otherwise, you must choose another domain and repeat the check.
You might be wondering what happens in the unlikely event that ownership of 'manning' hasn’t been registered yet but another name with the same web3.sha3() hash has been registered. If this happens, you won’t be able to register 'manning' because it would appear to the registrar as already taken.
After checking that the domain is free, you can claim it by registering it through the test registrar against one of your TESTNET accounts; for example, eth.accounts[0]. (Make sure accounts[0] has enough Ether to execute the transaction by checking, as usual, with eth.getBalance(eth.accounts[0]); also, replace the password placeholder in the following snippet with your accounts[0] password.) Enter the following:
>personal.unlockAccount(eth.accounts[0], 'PASSWORD_OF_YOUR_ACCOUNT_0');
>var tx1 = testRegistrar.register(domainHash, eth.accounts[0], {from: eth.accounts[0]});
Check the value of tx1, and then check that the related transaction has been mined by going to Ropsten etherscan: https://ropsten.etherscan.io. Note that registering domain ownership on MAINNET is a more complex process. (See https://docs.ens.domains/en/latest/ for more details.)
Once the domain ownership transaction has been mined, it’s time to set up the domain name mapping configuration you saw in table 9.1. You already set some of the configuration (the domain account owner) by registering the domain ownership through the registrar. Now you have to configure the resolver and the target address that the domain name will be mapped to.
You can map your domain name to the public resolver (which, as you know, maps a domain name to a given Ethereum address) through the ENS registry as follows:
>tx2 = ens.setResolver(namehash('manning.test'), publicResolver.address, {from: eth.accounts[0]});
Check on Ropsten etherscan if tx2 has been mined, then configure the public resolver to point your domain name to the target address (for example, your test accounts[1]), as follows:
>publicResolver.setAddr(namehash('manning.test'), eth.accounts[1], {from: eth.accounts[0]});
Registering the ownership of a subdomain is slightly different from registering the ownership of a domain: you don’t perform it through the registrar, but through the ENS registry. Assign the ownership of the subdomain roberto.manning to accounts[2], as follows:
>ens.setSubnodeOwner(namehash('manning.test'), web3.sha3('roberto'), eth.accounts[2], {from: eth.accounts[0]});
The account running the transaction must be the owner of the 'manning.test' domain: accounts[0].
Using accounts[2], the owner of the 'roberto.manning.test' subdomain, you can now map it to the public resolver as usual:
>ens.setResolver(namehash('roberto.manning.test'), publicResolver.address, {from: eth.accounts[2]});
Finally, you can configure the public resolver to point your domain name to the target address (for example, your test accounts[3]), as follows:
>publicResolver.setAddr(namehash('roberto.manning.test'), eth.accounts[3], {from: eth.accounts[2]});
Resolving a domain name into an address is straightforward. Resolve 'manning.test' first:
>var domainName = 'manning.test';
>var domainNamehash = namehash(domainName);
>var resolverAddress = ens.resolver(domainNamehash);
>resolverContract.at(resolverAddress).addr(domainNamehash);
You’ll see
0x4e6c30154768b6bc3da693b1b28c6bd14302b578
and you can verify this is your accounts[1] address, as expected:
> eth.accounts[1]
This is a shortcut to resolve the domain name:
>getAddr(domainName);
0x4e6c30154768b6bc3da693b1b28c6bd14302b578
If you’re interested in learning more about ENS—for example, to claim an .eth domain name in MAINNET through a commit-reveal bid—I encourage you to consult the official documentation written by Nick Johnson, the creator of ENS. You can find it at https://docs.ens.domains/en/latest/.
A common use case for decentralized applications is to store a sequence of documents proving, for example, the provenance of goods traded through the applications. A typical example is diamonds, which traditionally are accompanied by paper certificates showing that they come from legitimate mines and traders. For more complex supply chains, such as in the field of international trade finance (https://en.wikipedia.org/wiki/Trade_finance), which involves multiple parties, such as a supplier, the bank of the supplier, a shipping company, an end client, and their bank, the paperwork might be more voluminous. Storing the equivalent electronic documentation directly on the blockchain would work but wouldn’t be ideal for a couple of reasons:
An alternative solution would be to store the electronic documentation on an off-blockchain database and include in the transaction only a cryptographic hash of each of the documents, to prove their content. This solution isn’t perfect, though, because the off-blockchain database would be a centralized resource not easily accessible by the Ethereum nodes. Even if the decentralized application could access the database where the documentation was stored, having this centralized repository would be contrary to the spirit of decentralized applications.
An ideal solution instead would be based on a decentralized storage repository. This is exactly what the Swarm platform, which is partially associated with Ethereum, aims to provide. Another valid alternative would be to use the existing IPFS distributed storage network. Let’s explore these two options.
Swarm is a content distribution platform whose main objective is to provide decentralized and redundant storage to Ethereum Dapps. It focuses specifically on holding and exposing smart contract data and code, as well as blockchain data.
Storage is decentralized in Swarm through a P2P network that makes it resistant to distributed denial of service (DDoS) attacks and censorship, and that provides fault tolerance and zero downtime because it has no single point of failure. The architecture of the P2P Swarm network, shown in figure 9.7, is similar to that of the Ethereum network: each node runs a Swarm client that manages local storage and communicates with its peer nodes through a common standard protocol called bzz. Currently, only one client implementation is available, written in the Go language, and it’s included in the Geth & Tools package you can download from the Go Ethereum website. The main difference from the Ethereum network is that all Ethereum nodes hold the same copy of the blockchain database, whereas each Swarm node contains a different set of data, as also illustrated in figure 9.7.
A Swarm node is linked to an Ethereum account known as a swarm base account. The (keccak 256-bit) hash of the address of the swarm base account determines the swarm base address, which is the address of a Swarm node within the Swarm network. A Swarm network is associated with a specific Ethereum network. For example, the main production Swarm network is associated with MAINNET, and a Swarm network is associated with the Ropsten Ethereum network. Because Swarm is part of the Ethereum technology stack, it makes full use of other components of the ecosystem, such as ENS.
When content is uploaded to Swarm, it’s broken down into 4 KB chunks that get scattered throughout the Swarm network. The upload process is illustrated in figure 9.8.
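Conceptually, the chunking step can be sketched in a few lines. The 4 KB size matches Swarm's chunk size; the Buffer-based splitting here is only an illustration of the idea, not the Swarm client's actual implementation (which also builds a tree of chunk hashes):

```javascript
// Split a piece of content into fixed-size chunks, as Swarm does before
// scattering them across the network.
var CHUNK_SIZE = 4096; // Swarm's 4 KB chunk size

function toChunks(buffer) {
  var chunks = [];
  for (var offset = 0; offset < buffer.length; offset += CHUNK_SIZE) {
    chunks.push(buffer.slice(offset, offset + CHUNK_SIZE));
  }
  return chunks;
}

var content = Buffer.alloc(10000, 'a'); // 10,000 bytes of sample content
var chunks = toChunks(content);
console.log(chunks.length);             // 3 chunks: 4096 + 4096 + 1808 bytes
```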
It involves the following steps:
The download process goes through a similar workflow, but in reverse order, as shown in figure 9.10:
From an operational point of view, the sustainability of the Swarm platform is based on monetary incentives aimed at encouraging and rewarding participants who provide the underlying storage resources. Storage is traded between participants who require it and those who provide it, so it tends to be allocated efficiently.
In this section, I’ll show you how to upload content to Swarm, get its root key, and then download it back from Swarm using the root key.
The first step you have to take is to download the Swarm client, swarm.exe, from the Go Ethereum website. If you downloaded geth from the Geth & Tools archive (or installer) link, you should already have swarm.exe in the same folder you’re running geth from. Otherwise, go back to the Go Ethereum website and download the Geth & Tools 1.8.12 package, which I believe is the latest archive still containing swarm.exe. Unzip it and copy swarm.exe into the same folder where you’ve placed geth.exe. In my case, I’ve placed it here: C:\Program Files\geth.
Now start up geth against TESTNET. (Remember to use the --bootnodes option if peer nodes aren’t located quickly, as you did at the start of chapter 8.) Type the following:
C:\Program Files\geth>geth --testnet
Geth will start synchronizing, as expected. From a separate command shell, start an interactive console:
C:\Program Files\geth>geth attach ipc:\\.\pipe\geth.ipc
Then, from the interactive console, get the address of your testnet accounts[1]:
> eth.accounts[1]
"0x4e6c30154768b6bc3da693b1b28c6bd14302b578"
You’ll run the Swarm client under this account by opening a new OS console and executing the following command from the folder where you placed the swarm executable (replacing your Ethereum testnet folder accordingly):
C:\Program Files\geth>swarm --datadir C:\Users\rober\AppData\Roaming\Ethereum\testnet --bzzaccount 0x4e6c30154768b6bc3da693b1b28c6bd14302b578
Table 9.2 explains the options I’ve used to start up the Swarm client.
| Option | Purpose |
|---|---|
| --datadir | Specifies the datadir path related to the environment to use—in our case, TESTNET (Ropsten) |
| --bzzaccount | Specifies the Ethereum account to use—in our case, TESTNET accounts[1] |
As you can see in figure 9.11, you’ll be asked to unlock accounts[1] by providing its password. Enter the password, as requested, and the client will start up with output similar to that in the screenshot in figure 9.12.
It might take a few minutes before your Swarm client synchronizes with a number of peers (by default up to a maximum of 25). Output similar to the following indicates capable peers have been found:
```
INFO [03-11|19:49:47] Peer faa9a1ae is capable (0/3)
INFO [03-11|19:49:47] found record <faa9a1aef3fb3b0792420a59f929907d86c0937db9310d6835a46f44301faf05> in kaddb
INFO [03-11|19:49:47] syncronisation request sent with address: 00000000-00000000, index: 0-0, session started at: 0, last seen at: 0, latest key: 00000000
INFO [03-11|19:49:47] syncer started: address: -, index: 0-0, session started at: 933, last seen at: 0, latest key:
INFO [03-11|19:49:47] syncer[faa9a1ae]: syncing all history complete
INFO [03-11|19:49:50] Peer d3f2a5c8 is capable (0/3)
```
Now that you’re connected to the Swarm network, you can upload some sample text onto the network. Open a new OS console and submit this HTTP request to your Swarm client through curl:
C:\Users\rober>curl -H "Content-Type: text/plain" --data-binary "my sample text" http://localhost:8500/bzz:/
You’ll immediately get a response showing the root key associated with the submitted content:
eab8083835dec1952eae934eef05dda96dadbcd5d0685251e8c9faab1d0a0f58
To get the content back from Swarm, you can now submit a new request that includes the root key you obtained:
C:\Users\rober>curl http://localhost:8500/bzz:/eab8083835dec1952eae934eef05dda96dadbcd5d0685251e8c9faab1d0a0f58/
As expected, you’ll get back the text you submitted earlier:
my sample text
The official documentation is an excellent resource to learn more about Swarm and to try out more advanced features: http://mng.bz/4OBv. But you also should be aware that the Swarm initiative has been criticized by some members of the decentralized web community for duplicating the effort of IPFS, a project with similar objectives but with a more general purpose. The following section explains IPFS and the reason for the controversy.
IPFS stands for InterPlanetary File System and, as you might guess from its name, is a hypermedia distribution protocol whose objective is to support a decentralized way of storing and sharing files. As with Swarm, storage is distributed over a P2P network, which you can consider a distributed file system. The IPFS way of storing files provides the same benefits as the Swarm network, such as zero downtime and resistance to DDoS attacks and censorship.
Files aren’t stored in their entirety in a single network location; they’re broken down into blocks, which are then transformed into IPFS objects and scattered across the network. An IPFS object is a simple structure containing two properties:

- Data—Byte array containing data in binary form
- Links—Array of Link objects, which link to other IPFS objects

A Link object, in turn, has the following structure:

- Name—String representing the name of the link
- Hash—Hash of the linked IPFS object
- Size—Full size of the linked IPFS document and all of its linked IPFS objects
Each IPFS object is referenced by its hash.
An example of the IPFS object associated with a small file that’s decomposed into a single file block is shown in the following listing.
```json
{
  "Links": [],
  "Data": "\u0008\u0002\u0012\u0019 This is some sample text.\u0018\u0019"
}
```
An example of the IPFS object associated with a large file, bigger than 256 KB and broken down into multiple blocks, is shown in the following listing.
```json
{
  "Links": [
    {
      "Name": "",
      "Hash": "QmWXuN4UW2ZJ2fo5fj8xt7raMKvsihJJibUpwmtEhbHBci",
      "Size": 262158
    },
    {
      "Name": "",
      "Hash": "QmfHm32CQnagmHvNV5X715wxEEjgqADWpCeLPYvL9JNoMt",
      "Size": 262158
    },
    ...
    {
      "Name": "",
      "Hash": "QmXrgsJQGVxg7iH2tgQF8BV9dEhRrCVngc9tWg8VLFn7Es",
      "Size": 13116
    }
  ],
  "Data": "\u0008\u0002\u0018@ \u0010 \u0010 \u0010 \u0010 f"
}
```
Each block referenced in the Links array is represented by a document like that shown in listing 9.1. The workflow followed by an IPFS client for uploading a file on IPFS is illustrated in figure 9.13.
Let’s follow the steps of the upload workflow in detail:
Given that content is referenced by its hash, this design is focused on managing immutable files efficiently. For example, only one copy of a document has to exist in the network, because its hash points to a single piece of content, so duplication is eliminated.
But IPFS is also capable of managing mutable documents by tracking their changes through versioning. When a file changes, only its amended blocks need to be hashed, stored on the network, and indexed, and the unaffected blocks will be reused. The workflow of the download process is shown in figure 9.14.
Here are the steps of the workflow in detail:
Unlike Swarm, IPFS gives no direct incentives to its P2P participants for contributing file storage resources to the network; it relies on Filecoin, a separate but related blockchain-based initiative, to reward active participants.
If you want to learn more about IPFS, download the client and give it a go at https://ipfs.io/docs/getting-started/. I recommend you also have a look at the Git book The Decentralized Web Primer, which has various tutorials on how to install an IPFS client and how to interact with the network and examine IPFS objects through common operations such as uploading and then downloading a file: http://mng.bz/QQxQ. (Click the green Read button to access the content.)
At this point, do you think Swarm is duplicating the effort of IPFS, as some members of the decentralized web have argued? Now that you know about both decentralized content management infrastructures, you can probably judge for yourself whether the Swarm initiative has been worthwhile. Table 9.3, which summarizes the main features of both platforms, might help you answer the question.
| Feature | Swarm | IPFS |
|---|---|---|
| Storage architecture | Decentralized | Decentralized |
| Network architecture | P2P | P2P |
| Content-addressable storage | Yes | Yes |
| Block/chunk size | 4 KB | 256 KB |
| Native integration with Ethereum | Yes | No |
| Incentive strategy | Built-in | External (through Filecoin) |
Fans of the Swarm platform argue that its smaller chunk size, which allows much lower transmission latency, and its deeper integration with Ethereum are by themselves two key reasons for Swarm’s existence. You can find further analysis of the differences between Swarm and IPFS on various forums, such as Ethereum Stack Exchange: http://mng.bz/Xg0p.
Conventional web applications consume a variety of external services, typically by performing REST API calls or invoking legacy web services. You might be surprised to hear that this isn’t possible in the Dapp world. By design, Ethereum contracts can’t access external sources. This is to avoid two main sets of issues:
How do you get external data into your smart contract, working around the restrictions that the Ethereum infrastructure imposes, while staying confident about the data’s authenticity? You do it through oracles. An oracle is, in short, a bridge between the blockchain network and the outside world. It takes care of fetching the queried data from external data providers, and then it returns it to the requesting contract together with a proof of authenticity. With the process arranged this way, you can see an oracle as a middleman that merely plays a facilitating role. Although it’s a point of centralization, the requesting contract doesn’t need to trust the oracle, because the oracle can’t modify the data it’s returning without invalidating it against the proof of authenticity (which the end user can verify). Figure 9.15 shows the main components of a typical oracle-based data-feeding solution:
You can use two main strategies for feeding an oracle so consumers can trust its data:
When independent participants feed an oracle, the approved data set is generated on a consensus basis, for example by averaging numeric values or selecting the most frequent non-numeric values. This way of feeding data, which happens in a decentralized fashion and is subject to a consensus, seems to naturally fit the spirit of decentralized applications. But it has various drawbacks:
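For numeric feeds, the consensus step described above can be sketched in a few lines. A median is used here rather than a plain average because it is more robust to a single dishonest outlier; this is an illustration of the idea, not any specific oracle network's algorithm:

```javascript
// Combine independent numeric submissions into one approved value by
// taking the median, so one wildly wrong (or malicious) feed can't skew
// the result the way it would skew an average.
function median(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Five participants report a hypothetical ETH/BTC price; one is far off.
var submissions = [0.031, 0.032, 0.030, 0.031, 0.9];
console.log(median(submissions)); // 0.031: the outlier is ignored
```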
When a single source feeds an oracle, it demonstrates that the exposed data is genuine and hasn’t been tampered with by returning it to the client together with a proof-of-authenticity document. Such a proof can be generated through services such as TLSNotary and is based on various technologies, such as auditable virtual machines and trusted execution environments.
Oraclize offers one of the most popular frameworks for feeding smart contracts from a single source. This solution has two main advantages with respect to oracles fed by multiple participants:
Enough talking! I’ll now show you how to build your first oracle consumer contract.
If you want to hook a contract into Oraclize, you have to
Your contract, as illustrated in the sample oracle shown in listing 9.3 (which comes from the Oraclize documentation), should contain
```solidity
pragma solidity ^0.4.0;
import "github.com/oraclize/ethereum-api/oraclizeAPI.sol";          // Imports the Oraclize API

contract KrakenPriceTicker is usingOraclize {                        // Inherits Oraclize functionality
    string public ETHXBT;                                            // Holds the latest ETH/XBT price

    event newOraclizeQuery(string description);
    event newKrakenPriceTicker(string price);

    function KrakenPriceTicker() {                                   // Constructor
        oraclize_setProof(proofType_TLSNotary | proofStorage_IPFS);  // Requests TLSNotary proofs, stored on IPFS
        update();                                                    // Performs the initial data request
    }

    function __callback(bytes32 myid, string result, bytes proof) { // Called back by the Oraclize engine
        if (msg.sender != oraclize_cbAddress()) throw;               // Accepts calls only from the Oraclize address
        ETHXBT = result;
        newKrakenPriceTicker(ETHXBT);
        update();                                                    // Schedules the next data request
    }

    function update() payable {
        if (oraclize_getPrice("URL") > this.balance) {
            newOraclizeQuery("Oraclize query was NOT sent, please add some ETH to cover for the query fee");
        } else {
            newOraclizeQuery("Oraclize query was sent, standing by for the answer..");
            oraclize_query(60, "URL",
                "json(https://api.kraken.com/0/public/Ticker?pair=ETHXBT).result.XETHXXBT.c.0"); // The data request
        }
    }
}
```
Look closely at the data request to the Oraclize engine within the update method:
oraclize_query(60, "URL",
    "json(https://api.kraken.com/0/public/Ticker?pair=ETHXBT).result.XETHXXBT.c.0");
The data request is performed by calling the oraclize_query() function, inherited from the usingOraclize base contract, with a set of parameters, as shown in figure 9.16:
As you can see in figure 9.16, results are extracted from the query through a result parser, which depends on the nature of the data source being called. Table 9.4 summarizes the supported parsers.
| Parser type | Parser identifier | Description |
|---|---|---|
| JSON parser | json | Converts results to a JSON object, from which you can extract specific properties |
| XML parser | xml | Typically parses legacy web service results |
| HTML parser | html | Useful for HTML scraping |
| Binary helper | binary | Can extract items from binary results with slice(offset, length) |
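To make the json parser concrete, here is a plain-JavaScript sketch (not Oraclize code) of how a path expression such as `result.XETHXXBT.c.0` navigates a Kraken-style ticker payload. The response object below is a made-up example, not live market data.

```javascript
// Sketch: how a path like json(...).result.XETHXXBT.c.0 walks a
// Kraken-shaped JSON response. Values are invented for illustration.
const response = {
  error: [],
  result: {
    XETHXXBT: {
      c: ["0.032010", "1.25000000"] // c = last trade [price, lot volume]
    }
  }
};

// Walk a dot-separated path; numeric steps act as array indices.
function extract(obj, path) {
  return path.split(".").reduce((node, key) => node[key], obj);
}

const price = extract(response, "result.XETHXXBT.c.0");
console.log(price); // "0.032010"
```

This is the same idea the Oraclize engine applies server-side: only the extracted value, not the whole response, is delivered to your contract.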
Now that you’ve learned how to perform a data request, we’ll look at how the Oraclize engine responds with the results. As you saw in figure 9.15, when processing a request, the Oraclize engine grabs the results from the relevant data source, and then it creates a result transaction, which it sends back to the Ethereum network. This transaction is also known as an Oraclize callback transaction, because during its execution, it calls back the oracle contract performing the request on its __callback function:
```solidity
function __callback(bytes32 myid, string result, bytes proof) {
    if (msg.sender != oraclize_cbAddress()) throw;
    ETHXBT = result;
    newKrakenPriceTicker(ETHXBT);
    update();
}
```
When the result transaction calls the __callback function, the value of the ETHXBT state variable is updated, and you can then use it in the rest of the contract code.
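The guard at the top of __callback is what keeps arbitrary accounts from injecting fake results: the state is updated only if the caller is the address Oraclize calls back from (returned by oraclize_cbAddress()). Here is a plain-JavaScript sketch of that authorization logic; the addresses are invented placeholders, not real accounts.

```javascript
// Sketch of the __callback authorization check, in plain JavaScript.
// In Solidity this is: if (msg.sender != oraclize_cbAddress()) throw;
const ORACLIZE_CB_ADDRESS = "0xORACLIZE"; // hypothetical callback address

const ticker = {
  ETHXBT: null,
  // Simulates __callback(myid, result, proof)
  callback(sender, result) {
    if (sender !== ORACLIZE_CB_ADDRESS) {
      throw new Error("revert: caller is not the Oraclize callback address");
    }
    this.ETHXBT = result; // state changes only for the genuine caller
  }
};

ticker.callback(ORACLIZE_CB_ADDRESS, "0.032010"); // accepted
try {
  ticker.callback("0xATTACKER", "999.9");         // rejected
} catch (e) {
  console.log("rejected:", e.message);
}
console.log(ticker.ETHXBT); // "0.032010"
```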
If you want to run the data-aware contract from listing 9.3, you need the Oraclize Remix plugin, which resolves the import of the oraclizeAPI.sol file, including the usingOraclize base contract, directly from GitHub: http://mng.bz/y1ry.
A dialog box will appear, warning that "Remix is going to load the extension 'Oraclize' located at https://remix-plugin.oraclize.it. Are you sure to load this external extension?" Click OK.
I encourage you to try out KrakenPriceTicker.sol, which you can find already set up within the Gist menu on the left side of the screen (Gist > KrakenPriceTicker). Before running it
After having done so, do the following:
Four categories of tools can improve the Dapp development cycle: development IDEs, development frameworks, JavaScript testing frameworks, and web UI frameworks.
Development IDEs and development frameworks are tools that help you speed up the development cycle. Although IDEs and development frameworks offer similar functionality, the former are slightly more focused on code editing and compilation, whereas the latter offer powerful deployment capabilities.
For a few months, Ethereum Studio appeared to be the de facto IDE for developing Ethereum Dapps, because it provided good code editing capabilities coupled with Web3 integration and smooth contract deployment functionality. But then ether.camp, the company behind it, stopped supporting it. As a result, developers are advised to instead use generic code editing tools, such as Sublime, Atom, Visual Studio Code, Vi, and Emacs, configured with the related Solidity plugin.
The objective of Ethereum development frameworks is to streamline the development cycle and allow developers to focus on writing code rather than spending most of their time compiling it, redeploying it, and retesting it manually.
Various third-party smart contract frameworks have appeared since the launch of the Ethereum platform:
Truffle is probably the most advanced Ethereum development framework, and it focuses mainly on simplifying the building, testing, packaging, and deployment of Solidity contracts. It’s distributed as a Node.js package and provides a REPL console.
Truffle’s key selling point is migration—the way this framework manages the scripting and configuration of contract deployment. This is the framework you’ll use in the next few chapters.
Populus is functionally similar to Truffle in that it's designed to simplify the compile-test-deploy cycle by working on smart contract projects organized with a specific folder structure. It provides configuration management that allows you to progress smoothly throughout development from an in-memory blockchain such as Ganache, to a private internal network, and finally to a public one. What distinguishes Populus from other frameworks is that it allows a developer to script unit tests or deployment instructions in Python.
Embark aims to be a platform-agnostic Dapp framework to simplify the development and deployment of any decentralized application. This framework simplifies the management of multicontract Dapps, and you can configure it to automatically deploy a contract when a code change is detected. It allows decentralized storage through the IPFS protocol (and Swarm) and decentralized messaging through the Whisper protocol.
The Dapp framework is geared mainly toward the Linux world and is distributed through the Nix package manager. The emphasis of this framework is on contract packaging under the Ethereum Smart Contract Packaging Specification (https://github.com/ethereum/EIPs/issues/190) and on decentralizing contract code storage through the IPFS protocol, which we examined in section 9.4. Dapp also provides a unit testing facility through ethrun and ds-test.
Deciding which development framework to adopt might be difficult, as all of them offer similar compile-test-deploy functionality, although delivered in slightly different ways. It might sound obvious, but the best way to determine which one suits your needs best is to try out all of them.
Given the bias of the Ethereum platform toward JavaScript, it’s natural that several generic JavaScript frameworks have become a common feature of the Ethereum ecosystem. Let’s see why you should consider including JavaScript testing and UI frameworks in your development environment.
Using a generic JavaScript testing framework, as opposed to the unit testing functionality that a main development framework (such as Truffle or Embark) offers, provides
The two most popular JavaScript unit testing frameworks used for developing decentralized applications are Mocha and Jasmine, and you’ll be using Mocha in the next few chapters. Let’s now move on to the web UI frameworks.
Although the UI is an important element of a Dapp, because it connects the end user with the backend smart contracts, the Ethereum platform doesn't yet fully support any technology for developing the presentation layer of an Ethereum application. Because you can include and reference web3.js from the JavaScript of a plain HTML5 web page, it's natural to think that an easy way of exposing a Dapp is through web pages.
Given the abundance of excellent JavaScript UI frameworks, it’s hard to recommend any one framework in particular. But it’s worth mentioning that frameworks such as Meteor, Angular, Vue, and, more recently, React are getting increasing traction in the Ethereum community. As far as we’re concerned, we’ll stick to minimalistic solutions based on plain HTML and JavaScript, but feel free to embellish the UI code with the framework of your choice.