In the previous chapter, I gave you some advice on areas you should look at before deploying your Dapp on the production network. I believe security is such an important topic that it should be presented separately, so I’ve decided to dedicate this entire chapter to it.
I’ll start by reminding you of some limitations in the Solidity language that, if you overlook them, can become security vulnerabilities. Among these limitations, I’ll particularly focus on external calls and explain various risks you might face when executing them, but I’ll also try to give you some tips for avoiding or minimizing such risks. Finally, I’ll present classic attacks that might be launched against Ethereum Dapps so that you can avoid costly mistakes, especially when Ether is at stake.
You should pay attention to certain limitations of the Solidity language, because malicious participants generally exploit them as the first line of attack against unaware developers. These limitations include the public visibility of all contract data, the difficulty of generating randomness on-chain, the lack of enforcement of view functions, and gas limits.
Some of these vulnerabilities, such as those around randomness, might have more severe consequences, such as losing Ether. Other vulnerabilities, such as those around gas limits, have less severe consequences; for example, they can be exploited for denial of service attacks that can only cause temporary malfunctions. Whether they seem severe or not, you shouldn’t underestimate any of these vulnerabilities.
As you already know, data stored on the blockchain is always public, regardless of the visibility level of the contract state variables it's stored in. For example, everybody can still read the value of a contract state variable declared as private.
If you need privacy, you need to implement a hash commit-reveal scheme like the one the MAINNET ENS domain registration uses, as described in chapter 9, section 9.3.2. For example, to conceal a bid in an auction, the original value must not be submitted directly. Instead, bidding should be structured in two phases, as shown in figure 14.1: in the commit phase, the bidder submits only a hash of the bid; in the reveal phase, after bidding has closed, the bidder submits the original value, which is verified against the committed hash.
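The two phases can be sketched in Solidity as follows. This is a minimal illustration of the idea, not the ENS implementation; the contract name, the sealing formula, and the absence of phase deadlines are simplifying assumptions:

```solidity
contract SealedBidAuction {
    mapping(address => bytes32) sealedBids;

    // Commit phase: the bidder submits only the hash of the bid
    // amount and a secret nonce; the amount itself stays private.
    function commitBid(bytes32 _sealedBid) public {
        sealedBids[msg.sender] = _sealedBid;
    }

    // Reveal phase: after bidding closes, the bidder submits the
    // original values, which are checked against the committed hash.
    function revealBid(uint _amount, bytes32 _nonce) public {
        require(keccak256(msg.sender, _amount, _nonce)
            == sealedBids[msg.sender]);
        // the revealed amount can now be processed as a valid bid
    }
}
```

A production version would also enforce deadlines on each phase and handle bidders who commit but never reveal.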
If you want the state of your decentralized application to depend on randomness, you’ll face the same challenges associated with concealing private information that you saw in the previous section. The main concern is preventing miners from manipulating randomness to their advantage while also making sure the logic of your contract is executed exactly in the same way on all nodes.
Consequently, the way you handle randomness in a Dapp should be similar to the commit-reveal scheme for private data. For example, as you can see in figure 14.2, in a decentralized roulette game each participant would first commit the hash of a secret number; only after all the commitments had been collected would the secrets be revealed, verified against the hashes, and combined into the random outcome.
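The commit-reveal flow for randomness can be sketched like this. The contract name and the way the secrets are combined are illustrative assumptions; a real game would also enforce phase deadlines and penalize players who never reveal:

```solidity
contract Roulette {
    mapping(address => bytes32) commitments;
    uint combinedSecret;

    // Commit phase: each player submits only the hash of a secret number,
    // so no miner or player can see the others' contributions.
    function commitSecret(bytes32 _hashedSecret) public {
        commitments[msg.sender] = _hashedSecret;
    }

    // Reveal phase: after all commitments are in, each player reveals
    // their secret, which is verified and mixed into the shared seed.
    function revealSecret(uint _secret) public {
        require(keccak256(_secret) == commitments[msg.sender]);
        combinedSecret ^= _secret;   // no single participant controls the final seed
    }
}
```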
Defining a function as view doesn't guarantee that nothing can modify the contract state while you're running it. The compiler doesn't perform such a check, nor does the EVM; the compiler only returns a warning. (Version 0.5.0 and higher of the Solidity compiler will enforce this check.) For example, if you define the authorize() function of SimpleCoin as
```solidity
function authorize(address _authorizedAccount, uint256 _allowance)
    public view returns (bool success) {
    allowance[msg.sender][_authorizedAccount] = _allowance;
    return true;
}
```
you’ll get the following warning, because the state of the allowance mapping is being modified:
Warning: Function declared as view, but this expression (potentially) modifies the state...
If the contract state gets modified, a transaction is executed (rather than a simple call), and this consumes gas. An attacker might take advantage of the fact that the contract owner didn't foresee gas expenditure for this function, with consequences ranging from a few transaction failures to sustained DoS attacks. To avoid this mistake, pay attention to compiler warnings and make sure you rectify the code accordingly.
As you know, to be processed successfully, a transaction must not exceed the gas limit that the sender set. If a transaction fails because it hits the gas limit, the state of the contract is reverted, but the sender still pays the transaction costs and doesn't get refunded.
The gas limit that the transaction sender sets can be favorable or detrimental to security, depending on how it's used. Here are the two extreme cases, as illustrated in figure 14.3: a limit that's too low causes genuine transactions to fail while still costing gas, whereas a limit that's too high can be exploited by an attacker who forces the transaction to perform, and pay for, far more work than expected.
In general, the best advice is to set the lowest possible gas limit that allows all genuine transactions to be completed against the expected logic. But it’s hard to nail a reasonable gas estimate that’s safe for both completion and security.
For example, if the logic executing a transaction includes loops, the transaction sender might decide to set a relatively high gas limit to cover the eventuality of a high number of loops. But it would be difficult to figure out in advance whether the gas limit would be hit, especially if the number of loops was determined dynamically and depended on state variables. If any of the state variables were subject to user input, an attacker could manipulate them so that the number of loops became very big, and the transaction would be more likely to run out of gas. Trying to bypass this problem by setting a very high gas limit defeats the purpose of the limit itself and isn’t the right solution. In the next few sections, we’ll explore correct solutions.
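One common pattern for keeping loop-heavy logic within a predictable gas budget is to process the work in bounded batches across several transactions, rather than in one unbounded loop. Here's a minimal sketch; the contract and function names are assumptions for illustration:

```solidity
contract PaginatedProcessor {
    address[] investors;
    uint nextInvestor;   // index of the first investor not yet processed

    // Processes at most _batchSize investors per transaction, so the gas
    // needed stays bounded no matter how large the array grows, even if
    // an attacker inflates it with many tiny entries.
    function processBatch(uint _batchSize) public {
        uint end = nextInvestor + _batchSize;
        if (end > investors.length) end = investors.length;
        for (uint i = nextInvestor; i < end; ++i) {
            // ... per-investor logic ...
        }
        nextInvestor = end;   // resume from here in the next transaction
    }
}
```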
Calls to external contracts introduce several potential threats to your application. This section will help you avoid or minimize them.
The first word of advice is to avoid calling external contracts if you can use alternative solutions. External calls transfer the flow of your logic to untrusted parties that might be malicious, even if only indirectly. For example, even if the external contract you’re calling isn’t malicious, its functions might in turn call a malicious contract, as shown in figure 14.4.
Because you lose direct control of the execution of your logic, you’re exposed to attacks based on race conditions and reentrancy, which we’ll examine later. Also, after the external call is complete, you must be careful about how to handle return values, especially in case of exceptions. But often, even if it might feel risky, you have no choice but to interact with external contracts, for example, at the beginning of a new project, when you want to make quick progress by taking advantage of tried and tested components. In that case, the safest approach is to learn about the related potential pitfalls and write your code to prevent them—read on!
You can perform external calls to invoke a function on an external contract or to send Ether to an external contract. It’s also possible to invoke the execution of code while simultaneously transferring Ether. Table 14.1 summarizes the characteristics of each way of performing an external call.
Both send() and call() are becoming obsolete starting with version 0.5 of the Solidity compiler.
| Call | Purpose | External function called | Throws exception | Execution context | Message object | Gas limit |
| --- | --- | --- | --- | --- | --- | --- |
| externalContractAddress.send(etherAmount) | Raw Ether transfer | Fallback | No | N/A | N/A | 2,300 |
| externalContractAddress.transfer(etherAmount) | Safe Ether transfer | Fallback | Yes | N/A | N/A | 2,300 |
| externalContractAddress.call(bytes4(sha3("externalFunction()"))) | Raw function call in context of external contract | Specified function | No | External contract | Original msg | Gas limit of original call |
| externalContractAddress.callcode(bytes4(sha3("externalFunction()"))) | Raw function call in context of caller | Specified function | No | Calling contract | New msg created by caller | Gas limit of original call |
| externalContractAddress.delegatecall(bytes4(sha3("externalFunction()"))) | Raw function call in context of caller | Specified function | No | Calling contract | Original msg | Gas limit of original call |
| ExternalContract(externalContractAddress).externalFunction() | Safe function call | Specified function | Yes | External contract | Original msg | Gas limit of original call |
When using call(), callcode(), and delegatecall(), you can transfer Ether simultaneously with the call invocation by specifying the Ether amount with the value keyword, as follows:
```solidity
externalContractAddress.call.value(etherAmount)(
    bytes4(sha3("externalFunction()")));
```
It’s also possible to transfer Ether without calling any function, in this way:
```solidity
externalContractAddress.call.value(etherAmount)();
```
This way of sending Ether has advantages and disadvantages: it allows the recipient to have a more complex fallback function, but for the same reason, it exposes the sender to malicious manipulation. As you can understand from table 14.1, various aspects of external calls might affect security when performing an external call:
Let’s examine them one by one.
It’s possible to group call execution types into two sets: those only allowing you to transfer Ether and those allowing you to call any function.
The send() and transfer() calls can only invoke (implicitly) the fallback function of the external contract, as shown here:
```solidity
contract ExternalContract {
    ...
    function() payable { }    // fallback function, implicitly invoked by send()
}

contract A {
    ...
    function doSomething() {
        ...
        externalContractAddress.send(etherAmount);
    }
}
```
In general, a fallback function can contain logic of any complexity. But send() and transfer() impose a gas limit of 2,300 on the execution of the fallback function by forwarding only a 2,300 gas stipend to the external contract. This is such a low gas limit that, aside from transferring Ether, the fallback function can only perform a logging operation.
The low limit reassures the sender against potential reentrancy attacks (which I'll describe shortly). It does so because when the control flow is transferred to the fallback function, the external contract isn't able to perform any operations other than accepting the Ether transfer. On the other hand, this means you can't use send() and transfer() if you need to execute any substantial logic around the Ether payment. If you haven't fully understood this point yet, don't worry: it'll become clear when you learn about reentrancy attacks in the pages that follow.
All other execution types can invoke custom external functions while transferring Ether to the external contract. The downside of such flexibility is that although you can associate logic of any complexity with an Ether transfer (thanks to a gas limit that can be as high as the sender wishes), the risk of a malicious manipulation of the external call, and consequently of diversion of Ether, is also higher.
As I explained earlier, it’s possible to also purely transfer Ether without calling any function, as follows:
```solidity
externalContractAddress.call.value(etherAmount)();
```
This way of sending Ether has the advantages and disadvantages associated with external calls through call(). The unrestricted gas limit on call() allows the recipient to have a more complex fallback function that can also access contract state, but for the same reason, it exposes the sender to potential malicious external manipulation.
The safest way to make an Ether transfer is to execute it through send() or transfer() and consequently to have it completely decoupled from any business logic.
From the point of view of the behavior when errors occur in external calls, you can divide call execution types into raw and safe. I’ll discuss those types here and then move on to discuss the different contexts that you can execute calls in.
Most call execution types are considered raw because if the external function throws an exception, they return a Boolean false result but don’t revert the contract state automatically. Consequently, if no action is taken after the unsuccessful call, the state of the calling contract may be incorrect, and any Ether you sent will be lost without having produced the expected economic benefit. Here are some examples of unhandled external calls:
```solidity
externalContractAddress.send(etherAmount);           // raw Ether transfer: failure isn't handled
externalContractAddress.call.value(etherAmount)();   // raw Ether transfer through call(): failure isn't handled
externalContractAddress.call.value(etherAmount)(
    bytes4(sha3("externalFunction()")));             // raw function call: failure isn't handled
```
Following the external call, you have two ways to revert the contract state if the call fails: you can use require() or revert().
The first way to manually revert the state if errors occur is to introduce require() conditions on some of the state variables. You must design the require() condition to fail if the external call fails, so the contract state gets reverted automatically, as shown in this snippet:
```solidity
contract ExternalContract {
    ...
    function externalFunction(uint _input) payable {
        ...                                               // logic expected to alter the state of contract A
    }
}

contract A {
    ...
    function doSomething() {
        uint stateVariable;
        uint initialBalance = this.balance;
        uint commission = 60;
        externalContractAddress.call.value(commission)(
            bytes4(sha3("externalFunction(uint256)")), 10);   // raw external call
        require(this.balance == initialBalance - commission); // reverts if the Ether transfer failed
        require(stateVariable == expectedValue);              // reverts if the expected state change didn't happen
    }
}
```
The second way is to perform explicit checks followed by a call to revert() if the checks are unsuccessful, as shown in this code:
```solidity
contract ExternalContract {
    ...
    function externalFunction(uint _input) payable {
        ...                                               // logic expected to alter the state of contract A
    }
}

contract A {
    uint stateVariable;
    ...
    function doSomething() {
        uint initialBalance = this.balance;
        uint commission = 60;
        if (!externalContractAddress.call.value(commission)(
            bytes4(sha3("externalFunction(uint256)")), 10))
            revert();                                     // reverts the state if the external call failed
    }
    ...
}
```
Two types of external calls are considered safe in that the failure of the external call propagates the exception to the calling code, and this reverts the contract state and Ether balance. The first type of safe call is
```solidity
externalContractAddress.transfer(etherAmount);    // safe Ether transfer
```
The second type of safe call is a high-level call to the external contract:
```solidity
ExternalContract(externalContractAddress)
    .externalFunction();                          // safe high-level function call
```
Favor safe calls through transfer(), for transferring Ether, or through direct high-level custom contract functions, for executing logic. Avoid unsafe calls such as send() for sending Ether and call() for executing logic. If a safe call fails, the contract state will be reverted cleanly, whereas if an unsafe call fails, you’re responsible for handling the error and reverting the state.
You can execute a call in the context of the calling contract, which means it affects (and uses) the state of the calling contract. You can also execute it in the context of the external contract, which means it affects (and uses) the state of the external contract.
If you scan through the values in the execution context column of table 14.1, you’ll realize that most call execution types involve execution in the context of the external contract. The code in the following listing shows an external call taking place in the context of the external contract.
```solidity
contract A {
    uint value;                                       // state variables of A
    address msgSender;
    address externalContractAddress = 0x5;

    function setValue(uint _value) {
        externalContractAddress.call(
            bytes4(sha3("setValue(uint256)")), _value);  // raw call executed in the context of ExternalContract
    }
}

contract ExternalContract {
    uint value;                                       // state variables of ExternalContract
    address msgSender;

    function setValue(uint _value) {
        value = _value;                               // modifies ExternalContract's state
        msgSender = msg.sender;                       // msg is the original message
    }
}
```
Through the example illustrated in figure 14.5, I’ll show you how the state of ContractA and ExternalContract change following the external call implemented in listing 14.1.
The addresses of the user and contract accounts used in the example are summarized in table 14.2.
| Account | Address |
| --- | --- |
| user1 | 0x1 |
| user2 | 0x2 |
| ContractA | 0x3 |
| ContractB | 0x4 |
| ExternalContract | 0x5 |
The initial state of the contracts before the external call takes place is summarized in table 14.3. The state of the contracts will change to that shown in table 14.4.
| | ContractA | ExternalContract |
| --- | --- | --- |
| value | 16 | 24 |
Now imagine user1 performs the following call on ContractA; for example, from a web UI, through Web3.js, as you saw in chapter 12:
```
ContractA.setValue(33)
```
| | ContractA | ExternalContract |
| --- | --- | --- |
| value | 16 | 33 |
| msg sender | 0x1 | 0x1 |
In summary, the state of ContractA hasn’t changed, whereas the state of ExternalContract has been modified, as shown in table 14.4. The msg object that ExternalContract handles is the original msg object that user1 generated while calling ContractA.
Execution through delegatecall, on the other hand, takes place in the context of the calling contract. The code in the following listing shows an external call taking place in the context of the calling contract.
```solidity
contract A {
    uint value;                                       // state variables of A
    address msgSender;
    address externalContractAddress = 0x5;

    function setValue(uint _value) {
        externalContractAddress.delegatecall(
            bytes4(sha3("setValue(uint256)")), _value);  // raw call executed in the context of A
    }
}

contract ExternalContract {
    uint value;                                       // state variables of ExternalContract
    address msgSender;

    function setValue(uint _value) {
        value = _value;                               // when delegatecalled, modifies A's state
        msgSender = msg.sender;                       // msg is the original message
    }
}
```
As I did earlier, through the example illustrated in figure 14.6, I’ll show you how the state of ContractA and ExternalContract change following the external call implemented in listing 14.2.
The initial state of the contracts before the external call takes place is summarized in table 14.5.
| | ContractA | ExternalContract |
| --- | --- | --- |
| value | 16 | 24 |
Now imagine user1 performs the following call on ContractA, for example, from a web UI:
```
ContractA.setValue(33)
```
In summary, the state of ContractA has been modified, whereas the state of ExternalContract hasn't changed, as shown in table 14.6. The msg object that ExternalContract handles is still the original msg object that user1 generated while calling ContractA.
| | ContractA | ExternalContract |
| --- | --- | --- |
| value | 33 | 24 |
| msg sender | 0x1 | 0x1 |
The last case to examine is when the implementation of ContractA.setValue() uses callcode rather than delegatecall, as shown here:
```solidity
function setValue(uint _value) {
    externalContractAddress.callcode(
        bytes4(sha3("setValue(uint256)")), _value);
}
```
Assuming the same initial state as before, after user1’s call, illustrated in figure 14.7, the state of the contracts will be that shown in table 14.7.
| | ContractA | ExternalContract |
| --- | --- | --- |
| value | 33 | 24 |
| msg sender | 0x1 | 0x3 |
As you can see, an external function that callcode calls is still executed in the context of the caller, as when the call was performed through delegatecall. But ContractA generates a new msg object when the external call takes place, and the message sender is ContractA.
From a security point of view, execution in the context of the external contract is clearly safer than in the context of the calling contract. When execution takes place in the context of the calling contract, such as when calling external functions through callcode or delegatecall, the caller is allowing the external contract read/write access to its state variables. As you can imagine, it's safe to do so only in limited circumstances, mainly when the external contract is under your direct control (for example, you're the contract owner). You can find a summary of the context and the msg object used for each call type in table 14.8.
| Call type | Execution context | msg object |
| --- | --- | --- |
| call | External contract | Original msg object |
| delegatecall | Caller contract | Original msg object |
| callcode | Caller contract | Caller contract-generated msg object |
If you must use a low-level call, favor call(), which executes in the context of the external contract, over callcode and delegatecall, which execute in the context of the calling contract. Bear in mind, though, that call() will become obsolete starting with version 0.5 of Solidity.
In general, a message object is supposed to flow from its point of creation up to the last contract of an external-call chain, which might span several contracts. This is true when invoking external calls through all external call types, apart from callcode, which generates a new message instead, as you saw when comparing the external call execution under callcode and delegatecall. The delegatecall opcode was introduced as a bug fix for the unwanted message-creating behavior of callcode. Consequently, you should avoid using callcode if possible.
Avoid callcode if possible and choose delegatecall instead.
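The typical legitimate use of delegatecall is running shared library code against the caller's own storage. Here's a minimal sketch; the contract names and function signature are assumptions for illustration, and note that the storage layouts of the two contracts must line up:

```solidity
contract CounterLib {
    uint counter;   // slot 0

    function increment(uint _by) public {
        counter += _by;   // when delegatecalled, this updates the caller's slot 0
    }
}

contract Consumer {
    uint counter;   // slot 0: deliberately mirrors CounterLib's layout
    address counterLibAddress;

    function incrementCounter(uint _by) public {
        // runs CounterLib's code in Consumer's context; msg.sender is preserved
        counterLibAddress.delegatecall(
            bytes4(sha3("increment(uint256)")), _by);
    }
}
```

After a call to Consumer.incrementCounter(5), it's Consumer's counter that has grown by 5; CounterLib's own storage is untouched.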
Apart from send() and transfer(), which impose on the external call a 2,300 gas stipend that's only sufficient to perform an Ether transfer and a log operation, all the other external call types forward to the external call the full gas limit of the original call. As I explained previously, both low and high gas limits have security implications, but when it comes to transferring Ether, a lower limit is preferable because it prevents external manipulation when Ether is at stake.
Favor a lower gas limit over a higher gas limit.
You should now have a better idea of the characteristics and tradeoffs associated with each external call type, and you might be able to choose the most appropriate one for your requirements. But even if you pick the correct call type, you might end up in trouble if you don’t use it correctly. In this section, I’ll show you some techniques for performing external calls safely. You’ll see how even performing an Ether transfer through the apparently safe and inoffensive transfer() can end up in a costly mistake if you don’t think through all the scenarios that could lead your call to fail.
Imagine you’ve developed an auction Dapp and you’ve implemented an Auction contract like the one shown in the open source Ethereum Smart Contract Best Practices guide coordinated by ConsenSys,[1] which I’ve provided in the following listing. Have a good look at this listing, because I’ll reference it a few times in this chapter.
See “Recommendations for Smart Contract Security in Solidity,” http://mng.bz/MxXD, licensed under Apache License, Version 2.0.
```solidity
//INCORRECT CODE //DO NOT USE!
//UNDER APACHE LICENSE 2.0
//Copyright 2016 Smart Contract Best Practices Authors
contract Auction {
    address highestBidder;
    uint highestBid;

    function bid() payable {
        require(msg.value >= highestBid);        // accepts the bid only if it's the new highest
        if (highestBidder != 0) {
            highestBidder.transfer(highestBid);  // refunds the previous highest bidder
        }
        highestBidder = msg.sender;              // records the new highest bid
        highestBid = msg.value;
    }
}
```
What happens if one of the bidders has implemented a fallback, as shown in the following listing, and then they submit a bid higher than the highest one?
```solidity
contract MaliciousBidder {
    address auctionContractAddress = 0x123;

    function submitBid() public {
        auctionContractAddress.call.value(100000000000)(
            bytes4(sha3("bid()")));
    }

    function() payable {
        revert();    // fallback rejects any refund attempted through transfer()
    }
    ...
}
```
As soon as the MaliciousBidder contract submits the highest bid through submitBid(), Auction.bid() refunds the previous highest bidder then sets the address and value of the highest bid to those of the MaliciousBidder. So far, so good. What happens next?
A new bidder now makes the highest bid. Auction.bid() will consequently try to refund MaliciousBidder, but the following line of code fails, even if the new bidder has done nothing wrong and the logic of the bid() function seems correct:
```solidity
highestBidder.transfer(highestBid);
```
This line fails because the current highestBidder is still the address of MaliciousBidder, and its fallback, which highestBidder.transfer() calls, throws an exception. If you think about it, no new bidder will ever be able to pass this line, because a refund to MaliciousBidder will be attempted on every occasion. The call to highestBidder.transfer() will keep failing before the address and value of a new highest bid can ever be updated, as illustrated in figure 14.8. That's why MaliciousBidder is . . . malicious!
What about replacing transfer() with send()? Using send() in the recommended way means wrapping it in require(), so an exception will still be thrown in the bid() function following a failure in send(). As a result, using send() instead of transfer(), as shown in the following line of code, doesn't solve the problem:
```solidity
require(highestBidder.send(highestBid));
```
With the current bid() implementation, you don't even need a malicious external bidder contract to end up in trouble: unintentional exceptions thrown by any external bidding contract with a faulty fallback function can also rock the boat. For example, a sloppy developer of a bidder contract, unaware of the gas limitations associated with transfer() (or send()), might have implemented a complex fallback function, such as the one shown in the code that follows, that accepts the refund and processes it by modifying its own contract state. That would exceed the 2,300 gas stipend of transfer() and almost immediately throw an out-of-gas exception:
```solidity
function() payable {
    refunds += msg.value;    // state change that exceeds the 2,300 gas stipend
}
```
As you can see, the current implementation of bid() relies heavily on the assumption that you’re dealing with honest and competent external contract developers. That might not always be the case.
A safer way to accept a bid is to separate the logic that updates the highest bidder from the execution of the refund to the previous highest bidder. The refund will no longer be pushed automatically to the previous highest bidder but should now be pulled with a separate request by them, as shown in the following listing. (This solution also comes from the ConsenSys guide I mentioned earlier.)
```solidity
//UNDER APACHE LICENSE 2.0
//Copyright 2016 Smart Contract Best Practices Authors
//https://consensys.github.io/smart-contract-best-practices/
contract Auction {
    address highestBidder;
    uint highestBid;
    mapping(address => uint) refunds;

    function bid() payable external {
        require(msg.value >= highestBid);
        if (highestBidder != 0) {
            refunds[highestBidder] += highestBid;  // records the refund owed instead of pushing it
        }
        highestBidder = msg.sender;                // records the new highest bid
        highestBid = msg.value;
    }

    function withdrawRefund() external {
        uint refund = refunds[msg.sender];
        refunds[msg.sender] = 0;
        msg.sender.transfer(refund);               // the refund is pulled by the previous highest bidder
    }
}
```
Pull payments also come in handy in case the function that makes a payment performs a number of payments in a loop. An example would be a function that refunds all the accounts of the investors in an unsuccessful crowdsale, as shown in the following listing.
```solidity
//INCORRECT CODE //DO NOT USE!
contract Crowdsale {
    address[] investors;
    mapping(address => uint) investments;

    function refundAllInvestors() payable onlyOwner external {
        for (uint i = 0; i < investors.length; ++i) {
            investors[i].send(investments[investors[i]]);
        }
    }
}
```
If an attacker makes very small investments from a very high number of accounts, the number of items in the investors array might become so big that the for loop will run out of gas before completing, because each step of the loop has a fixed gas cost. This is a form of DoS attack exploiting gas limits. A safer implementation is to keep only the refund assignment in refundAllInvestors() and to move the Ether transfer operation into a separate pull payment function called withdrawRefund(), similar to the one you saw earlier in the Auction contract, as you can see in the following listing.
```solidity
contract Crowdsale {
    address[] investors;
    mapping(address => uint) investments;
    mapping(address => uint) refunds;

    function refundAllInvestors() payable onlyOwner external {
        for (uint i = 0; i < investors.length; ++i) {
            refunds[investors[i]] = investments[investors[i]];
            investments[investors[i]] = 0;
        }
    }

    function withdrawRefund() external {
        uint refund = refunds[msg.sender];
        refunds[msg.sender] = 0;
        msg.sender.transfer(refund);
    }
}
```
Although pull payments are a good solution from the point of view of the contract that's transferring Ether out, now put yourself in the shoes of the bidder. If you're expecting Ether from an external contract, such as the Auction contract, don't assume the external contract implements safe pull-payment functionality, as shown in listing 14.7. Assume instead that it has been implemented in a suboptimal way, as in listing 14.6, the initial implementation you looked at. In this case, if you want to make sure a refund executed with transfer() (or send()) succeeds, you must provide a minimal fallback function: empty, or at most containing a single log operation, as shown here:
```solidity
function() public payable {}
```
Unfortunately, you can’t make sure your contract doesn’t receive Ether from unknown sources. You might think that having a fallback that always throws an exception or reverts the state of your contract when called, as shown here, should be sufficient to stop this undesired inflow of Ether:
```solidity
function() public payable { revert(); }
```
But I’m afraid there’s a way to transfer Ether to any address that doesn’t require any payable function on the receiving side—not even a fallback function. This can be achieved by calling
```solidity
selfdestruct(recipientAddress);
```
The selfdestruct() function was introduced to provide a way to destroy a contract in case of emergency, and with the same operation, to transfer all the Ether associated with the contract account to a specified address. Typically, this would be executed when a critical bug was discovered or when a contract was being hacked.
Unfortunately, selfdestruct() also lends itself to misuse. If an external contract contains at least 1 Wei and self-destructs, targeting the address of your contract, there isn’t much you can do. You might think receiving unwanted Ether wouldn’t be a serious issue, but if the logic of your contract depends on checks and reconciliations performed on the Ether balance, for example, through require(), you might be in trouble.
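The forced-Ether trick is simple enough to sketch in a few lines. The contract name is an assumption for illustration:

```solidity
contract ForcedEtherSender {
    // Deploy this contract funded with at least 1 Wei, then call attack():
    // selfdestruct() credits the victim's balance directly, without invoking
    // any of its functions, not even a fallback that reverts.
    function attack(address _victim) public {
        selfdestruct(_victim);
    }
}
```

The practical consequence is that conditions like require(this.balance == expectedBalance) can be broken at will by anyone willing to sacrifice a few Wei, so avoid strict equality checks on the contract's Ether balance.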
Now that we’ve reviewed Solidity’s known security weak spots associated with external calls, it’s time to analyze known attacks that have taken place exploiting such weaknesses. You can group attacks on Solidity contracts into three broad categories, depending on the high-level objective of the attacker. The objective can be to
Table 14.9 summarizes manipulation techniques associated with each attack category. The next few sections will define and present in detail each attack technique included in the table.
| Attack objective | Attack strategy | Attack technique |
| --- | --- | --- |
| Individual transaction manipulation | Race condition | Reentrancy, cross-function race condition |
| Favoring one transaction over others | Front-running | Front-running |
| Making contract unusable | Denial of service | Fallback calling revert(), exploiting gas limits |
This section only covers the most common attacks, mainly to give you an idea of how malicious participants can manipulate a contract. Also, new security attacks are continuously discovered, so you must learn about and constantly keep up to date with the latest security breaches by consulting the official Solidity documentation and the many other websites and blogs that cover the topic. I’ll point you to some resources in section 14.5.
Reentrancy attacks target functions containing an external call and exploit a race condition between simultaneous calls to this function caused by the possible time lag that takes place during the external call. The objective of the attack is generally to manipulate the state of the contract, often having to do with an Ether or custom cryptocurrency balance, by calling back the targeted function many times simultaneously while the attacker hijacks the execution of the external call. If we go back to the example of the auction Dapp I showed you earlier, an attacker could launch a reentrancy attack on an incorrect implementation of withdrawRefund() by requesting a refund many times in parallel while hijacking each refund call, as illustrated in figure 14.9.
The following code shows an incorrect implementation of withdrawRefund() (also from the ConsenSys guide) that will put your contract in danger:
```solidity
//INCORRECT CODE //DO NOT USE!
//UNDER APACHE LICENSE 2.0
//Copyright 2016 Smart Contract Best Practices Authors
function withdrawRefund() external {
    uint refund = refunds[msg.sender];
    require(msg.sender.call.value(refund)());    // external call executed before the balance is cleared
    refunds[msg.sender] = 0;                     // balance cleared only after the external call
}
```
As I mentioned, an attacker contract might call withdrawRefund() several times while hijacking each external call to the fallback function that enables the payment, as shown here:
```solidity
contract ReentrancyAttacker {
    function() payable public {
        uint maxUint = 2 ** 256 - 1;
        for (uint i = 0; i < maxUint; ++i) {
            for (uint j = 0; j < maxUint; ++j) {
                for (uint k = 0; k < maxUint; ++k) {    // nested loops delay the completion of the Ether transfer
                    ...
                }
            }
        }
    }
}
```
Such a slow execution of the Ether transfer would prevent withdrawRefund() from reaching the code line that clears the caller balance for a long time:
refunds[msg.sender] = 0;
Until this line is reached, various Ether transfers might take place, each equal to the amount owed to the caller. As a result, the caller could receive more Ether than they’re owed, as shown in the sequence diagram in figure 14.10.
The reason why I wanted to include in this chapter the auction Dapp from the ConsenSys guide, and particularly its incorrect implementation of the withdrawRefund() function, is that this code shows one of the vulnerabilities that contributed to the initial success of the DAO attack.
You can prevent this attack in a couple of ways: by transferring Ether with send() or transfer(), which forward only a small gas stipend, or by clearing the caller's balance before performing the external call, as in this corrected implementation:
function withdrawRefund() external {
    uint refund = refunds[msg.sender];
    refunds[msg.sender] = 0;                   // balance cleared first
    require(msg.sender.call.value(refund)());  // external call performed last
}
You’ve learned that reentrancy attacks exploit a race condition between simultaneous calls to the same function. But an attacker can also exploit race conditions between simultaneous calls on separate functions that try to modify the same contract state—for example, the Ether balance of a specific account.
A cross-function race condition could also happen on SimpleCoin. Recall SimpleCoin’s transfer() function:
function transfer(address _to, uint256 _amount) public {
    require(coinBalance[msg.sender] > _amount);
    require(coinBalance[_to] + _amount >= coinBalance[_to]);
    coinBalance[msg.sender] -= _amount;
    coinBalance[_to] += _amount;
    emit Transfer(msg.sender, _to, _amount);
}
Now, imagine you decided to provide a withdrawFullBalance() function, which closed the SimpleCoin account of the caller and sent them back the equivalent amount in Ether. If you implemented this function as follows, you’d expose the contract to a potential cross-function race condition:
function withdrawFullBalance() public {
    // INCORRECT CODE -- DO NOT USE!
    uint amountToWithdraw = coinBalance[msg.sender] * exchangeRate;
    require(msg.sender.call.value(amountToWithdraw)());  // external call before the balance is cleared
    coinBalance[msg.sender] = 0;
}
A cross-function race condition attack works in a similar way to the reentrancy attack shown earlier. An attacker would first call withdrawFullBalance() and, while hijacking the external call from their fallback function, as shown in the following code, they’d call transfer() to move their full SimpleCoin balance to another address they own before the execution of withdrawFullBalance() cleared this balance. In this way, they’d both keep the full SimpleCoin balance and get the equivalent Ether amount:
contract RaceConditionAttacker {
    function() payable public {
        uint maxUint = 2 ** 256 - 1;
        for (uint i = 0; i < maxUint; ++i) {
            for (uint j = 0; j < maxUint; ++j) {
                for (uint k = 0; k < maxUint; ++k) {
                    // ...waste gas to slow down completion of the Ether transfer
                }
            }
        }
    }
}
The solution is, as was the case for the reentrancy attack, to replace call.value()() with send() or transfer(). You would also need to make sure the external call that performs the balance withdrawal takes place in the last line of the function, after the caller balance has already been set to 0:
function withdrawFullBalance() public {
    uint amountToWithdraw = coinBalance[msg.sender] * exchangeRate;
    coinBalance[msg.sender] = 0;            // balance cleared first
    msg.sender.transfer(amountToWithdraw);  // transfer() forwards only a small gas stipend
}
More complex cases of reentrancy involve call chains spanning several contracts. The general recommendation is always to place external calls or calls to functions containing external calls at the end of the function body.
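For complex call chains, another common defense is a mutex, often called a reentrancy guard, which rejects any nested call into a protected function. The following is only a minimal sketch of the idea; the contract and modifier names are illustrative, not part of the auction Dapp discussed in this chapter:

```solidity
pragma solidity ^0.4.24;

contract ReentrancyGuarded {
    bool private locked;  // mutex flag: true while a protected function is running

    modifier nonReentrant() {
        require(!locked);  // reject reentrant (nested) calls
        locked = true;
        _;
        locked = false;
    }

    mapping(address => uint) public refunds;

    function withdrawRefund() external nonReentrant {
        uint refund = refunds[msg.sender];
        refunds[msg.sender] = 0;                   // effects before interaction
        require(msg.sender.call.value(refund)());  // external call performed last
    }
}
```

Even with a guard in place, it remains good practice to follow the checks-effects-interactions ordering shown here, so the contract state is consistent before any external call is made.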
The attacks based on race conditions you’ve seen so far try to manipulate the outcome of a transaction by altering its expected execution flow, generally by hijacking the part of the execution that takes place externally.
Other attack strategies work at a higher level and target decentralized applications for which the ordering of the execution of the transactions is important. Attackers try to influence the timing or ordering of transaction executions by favoring and prioritizing certain transactions over others. For example, a malicious miner might manipulate a decentralized stock market-making application by creating new buy order transactions when detecting in the memory pool many buy orders for a certain stock. The miner would then include only their own buy order transactions on the new block, so their transactions would get executed before any other similar order present in the memory pool, as illustrated in figure 14.11. If the miner’s PoW was successful, their buy order would become official. Subsequently, the stock price would rise because of the many buy orders that have been submitted but not executed yet. This would generate an instant profit for the miner.
This manipulation is an example of front running, which is the practice of malicious stock brokers who place the execution of their own orders ahead of those of their clients. A way to avoid this attack is to design the order clearing logic on batch execution rather than individual execution, with an implementation similar to batch auctions. With this setup, the auction contract collects all bids and then determines the winner with a single operation. Another solution is to implement a commit–reveal scheme similar to that described earlier in this chapter to disguise order information.
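The commit-reveal idea mentioned above can be sketched as follows. This is only an illustrative outline under assumed names (CommitReveal, commitOrder, revealOrder); a production scheme would also need phase deadlines and handling of unrevealed commitments:

```solidity
pragma solidity ^0.4.24;

contract CommitReveal {
    mapping(address => bytes32) public commitments;

    // Phase 1: submit only the hash of the order details plus a secret salt.
    // Observers of the memory pool see the hash, not the order itself.
    function commitOrder(bytes32 sealedOrder) public {
        commitments[msg.sender] = sealedOrder;
    }

    // Phase 2: once the commit phase has closed, reveal the original values.
    function revealOrder(uint amount, bytes32 salt) public {
        require(keccak256(abi.encodePacked(amount, salt))
            == commitments[msg.sender]);  // must match the earlier commitment
        // ...process the now-public order...
    }
}
```

Because the order details stay hidden until the commit phase has ended, a miner inspecting the memory pool cannot front-run individual orders based on their content.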
Some attacks aim to bring down a contract completely. These are known as denial of service (DoS) attacks.
As I’ve already shown you in the Auction contract at the beginning of this chapter, an attacker could make a contract unusable by implementing the following fallback function and then calling the targeted contract in such a way that it triggers an incoming payment:
function() payable {
    revert();  // reject any incoming Ether payment
}
If the targeted contract implements a function as shown here, it will become unusable as soon as it tries to send Ether to the attacker:
function bid() payable {
    // INCORRECT CODE -- DO NOT USE!
    // Under Apache License 2.0
    // Copyright 2016 Smart Contract Best Practices Authors
    require(msg.value >= highestBid);
    if (highestBidder != 0) {
        highestBidder.transfer(highestBid);  // push refund to the previous highest bidder
    }
    highestBidder = msg.sender;
    highestBid = msg.value;
}
As you already know, you can avoid this attack by implementing a pull payment facility rather than an automated push payment. (See section 14.3.1 for more details.)
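A pull-payment version of the auction might look like the following sketch. The contract and state variable names are illustrative, not the exact listing from the ConsenSys guide:

```solidity
pragma solidity ^0.4.24;

contract PullPaymentAuction {
    address public highestBidder;
    uint public highestBid;
    mapping(address => uint) public refunds;

    function bid() external payable {
        require(msg.value >= highestBid);
        if (highestBidder != address(0)) {
            refunds[highestBidder] += highestBid;  // record the refund; no external call here
        }
        highestBidder = msg.sender;
        highestBid = msg.value;
    }

    function withdrawRefund() external {
        uint refund = refunds[msg.sender];
        refunds[msg.sender] = 0;       // balance cleared before the transfer
        msg.sender.transfer(refund);   // a reverting fallback only blocks this caller's withdrawal
    }
}
```

With this design, an attacker whose fallback function reverts can only block their own withdrawal; bidding continues unaffected for everyone else.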
In the section on pull payments, you saw the example of an incorrectly implemented function that refunds all the accounts of the investors in an unsuccessful crowdsale:
contract Crowdsale {
    address[] investors;
    mapping(address => uint) investments;

    function refundAllInvestors() payable onlyOwner external {
        // INCORRECT CODE -- DO NOT USE!
        for (uint i = 0; i < investors.length; ++i) {
            investors[i].send(investments[investors[i]]);
        }
    }
}
I already warned you that this implementation lends itself to manipulation by an attacker who makes very small investments from a very high number of accounts. The high number of loop iterations forced by the resulting large investors array will damage the contract permanently, because any invocation of the function will exceed the block gas limit. This is a form of DoS attack that exploits gas limits. Refunds based on pull payment functionality also prevent this attack.
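If push refunds must be kept, another mitigation is to process them in bounded batches so that no single transaction iterates over the whole array. The following sketch assumes the investors and investments members and the onlyOwner modifier from the earlier Crowdsale listing; the batching function and progress variable are hypothetical names:

```solidity
// Illustrative sketch: refunds processed in bounded batches.
uint public nextInvestorToRefund;  // progress marker preserved across calls

function refundInvestorsBatch(uint batchSize) external onlyOwner {
    uint end = nextInvestorToRefund + batchSize;
    if (end > investors.length) {
        end = investors.length;
    }
    for (uint i = nextInvestorToRefund; i < end; ++i) {
        investors[i].send(investments[investors[i]]);
    }
    nextInvestorToRefund = end;  // resume from here on the next call
}
```

Because each call touches at most batchSize entries, the owner can size the batches to stay safely under the block gas limit, however large the investors array grows.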
You’ve learned about the pitfalls associated with external calls and how to avoid the most common forms of attack. Now I’ll close the chapter by sharing with you some security recommendations I’ve been collecting over time from various sources.
The official Solidity documentation has an entire section dedicated to security considerations and recommendations,[2] which I invite you to consult before deploying your contract on public networks. Other excellent resources are available, such as the open source Ethereum Smart Contract Best Practices guide (http://mng.bz/dP4g), initiated and maintained by the Diligence (https://consensys.net/diligence/) division of ConsenSys, which focuses on security and aims at raising awareness around best practices in this field. This guide, which I’ve referenced in various places in this chapter, is widely considered to be the main reference on Ethereum security. I’ve decided to adopt its terminology to make sure you can look up concepts easily, if you decide you want to learn more about anything I’ve covered here.
See “Security Considerations” at http://mng.bz/a7o9.
In table 14.10, I’ve listed an additional set of useful free resources on Ethereum security that ConsenSys Diligence has created. Presentations and posts by Christian Reitwiessner,[3] the head of Solidity at Ethereum, are also a must-read.
See “Smart Contract Security,” June 10, 2016, http://mng.bz/GWxA, and “How to Write Safe Smart Contracts,” November 10, 2015, http://mng.bz/zMx6.
Resource | Description |
---|---|
Secure smart contract philosophy1 | Series of Medium articles written by ConsenSys Diligence on how to approach smart contract security |
EIP 1470: SWC2 | Standardized weakness classification for smart contracts, so tool vendors and security practitioners can classify weaknesses in a more consistent way |
0x security audit report3 | Full security audit of the 0x smart contract system, carried out by ConsenSys Diligence. This gives a good idea of the weaknesses assessed during a thorough security audit. |
Audit readiness guide4 | Guidelines on how to prepare for a smart contract security audit |
1. See “Building a Philosophy of Secure Smart Contracts,” http://mng.bz/ed5G. | |
2. See these GitHub pages: https://github.com/ethereum/EIPs/issues/1469 and http://mng.bz/pgVR. | |
3. See http://mng.bz/O2Ej. | |
4. See Maurelian’s “Preparing for a Smart Contract Code Audit,” September 6, 2017, at http://mng.bz/YPqj. |
I’ll summarize in a short list the most important points all the resources I’ve mentioned tend to agree on. I reiterate, though, that it’s important to constantly keep up to date with the latest security exploits and discovered vulnerabilities on sites such as http://hackingdistributed.com, https://cryptonews.com, or https://cryptoslate.com. Here’s the list:
See Bernhard Mueller, “MythX Is Upping the Smart Contract Security Game,” http://mng.bz/0WmE, for an introduction to Mythril.
See “Introducing Panvala,” http://mng.bz/K1Mg, for an introduction to Panvala.