Model methods for client interaction

We have seen the most important model methods used to generate recordsets and to write to them. But a few more model methods are available for more specific actions, as shown here:

  • read([fields]): This is similar to browse, but instead of a recordset, it returns a list of rows of data with the fields given as its argument. Each row is a dictionary. It provides a serialized representation of the data that can be sent through RPC protocols and is intended to be used by client programs and not in server logic.
  • search_read([domain], [fields], offset=0, limit=None, order=None): This performs a search operation followed by a read on the resulting record list. It is intended to be used by RPC clients and saves them the extra round trip needed when doing a search first and then a read.
  • load([fields], [data]): This is used to import data acquired from a CSV file. The first argument is the list of fields to import, and it maps directly to a CSV top row. The second argument is a list of records, where each record is a list of string values to parse and import, and it maps directly to the CSV data rows and columns. It implements the features of CSV data import described in Chapter 4, Data Serialization and Module Data, like the External IDs support. It is used by the web client Import feature. It replaces the deprecated import_data method.
  • export_data([fields], raw_data=False): This is used by the web client Export function. It returns a dictionary with a data key containing the data, a list of rows. The field names can use the .id and /id suffixes used in CSV files, and the data is in a format compatible with an importable CSV file. The optional raw_data argument allows data values to be exported with their Python types, instead of the string representation used in CSV.
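Since these methods are aimed at RPC clients, it helps to see what a client call looks like. The following is a minimal sketch of a search_read call over XML-RPC using Python's standard xmlrpc.client; the database name, credentials, and the res.partner domain are assumptions for illustration only:

```python
import xmlrpc.client

def build_search_read_args(db, uid, password, model, domain, fields):
    # Positional arguments for execute_kw, in the order the Odoo
    # /xmlrpc/2/object endpoint expects them.
    return [db, uid, password, model, 'search_read', [domain],
            {'fields': fields}]

# Illustration only; this part requires a running Odoo server:
# models = xmlrpc.client.ServerProxy('http://localhost:8069/xmlrpc/2/object')
# rows = models.execute_kw(*build_search_read_args(
#     'mydb', 1, 'admin', 'res.partner',
#     [('is_company', '=', True)], ['name']))
# Each returned row is a dictionary, such as {'id': 7, 'name': 'Agrolait'}.
```

Note how search_read takes the domain positionally and the field list as a keyword argument, saving the separate search and read round trips in a single call.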

The following methods are mostly used by the web client to render the user interface and perform basic interaction:

  • name_get(): This returns a list of (ID, name) tuples with the text representing each record. It is used by default to compute the display_name value, providing the text representation of relation fields. It can be extended to implement custom display representations, such as displaying the record code and name instead of only the name.
  • name_search(name='', args=None, operator='ilike', limit=100): This also returns a list of (ID, name) tuples, where the display name matches the text in the name argument. It is used by the UI while typing in a relation field to produce the list of suggested records matching the typed text. For example, it is used to implement product lookup both by name and by reference while typing in a field to pick a product.
  • name_create(name): This creates a new record with only the title name to use for it. It is used by the UI for the quick-create feature, where you can quickly create a related record by just providing its name. It can be extended to provide specific defaults while creating new records through this feature.
  • default_get([fields]): This returns a dictionary with the default values for a new record to be created. The default values may depend on variables such as the current user or the session context.
  • fields_get(): This is used to describe the model's field definitions, as seen in the View Fields option of the developer menu.
  • fields_view_get(): This is used by the web client to retrieve the structure of the UI view to render. It can be given the ID of the view as an argument or the type of view we want using view_type='form'. Look at an example of this: rset.fields_view_get(view_type='tree').
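As an example of extending one of these methods, here is a minimal sketch of a name_get override that prefixes a record code to the name, assuming a hypothetical code field on the model; the formatting helper is plain Python:

```python
def format_display_name(code, name):
    # Hypothetical helper: prefix the record code, when set, to the name.
    return '[%s] %s' % (code, name) if code else name

# Inside a model class it could be used like this (illustration only):
# @api.multi
# def name_get(self):
#     return [(rec.id, format_display_name(rec.code, rec.name))
#             for rec in self]
```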

Overriding the default methods

We have learned about the standard methods provided by the API. But what we can do with them doesn't end there! We can also extend them to add custom behavior to our models.

The most common case is to extend the create() and write() methods. This can be used to add the logic triggered whenever these actions are executed. By placing our logic in the appropriate section of the custom method, we can have the code run before or after the main operations are executed.

Using the TodoTask model as an example, we can make a custom create(), which would look like this:

@api.model
def create(self, vals):
    # Code before create
    # Can use the `vals` dict
    new_record = super(TodoTask, self).create(vals)
    # Code after create
    # Can use the `new` record created
    return new_record

A custom write() would follow this structure:

@api.multi
def write(self, vals):
    # Code before write
    # Can use `self`, with the old values
    super(TodoTask, self).write(vals)
    # Code after write
    # Can use `self`, with the new (updated) values
    return True

These are common extension examples, but of course any standard method available for a model can be inherited in a similar way to add our custom logic to it.

These techniques open up a lot of possibilities, but remember that other tools are also available that are better suited for common specific tasks and should be preferred:

  • To have a field value calculated based on another, we should use computed fields. An example of this is to calculate a total when the values of the lines are changed.
  • To have field default values calculated dynamically, we can use a field default bound to a function instead of a scalar value.
  • To have values set on other fields when a field is changed, we can use on-change functions. An example of this is when picking a customer to set the document's currency to the corresponding partner's, which can afterwards be manually changed by the user. Keep in mind that on-change only works on form view interaction and not on direct write calls.
  • For validations, we should use constraint functions decorated with @api.constrains(fld1,fld2,...). These are like computed fields but are expected to raise errors when conditions are not met instead of computing values.
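To make the first point concrete, here is a minimal sketch of a computed total; the amount_total, line_ids, and price names are hypothetical and not fields of the TodoTask model:

```python
def compute_total(line_prices):
    # Core logic of the computed field: derive one value from the lines.
    return sum(line_prices)

# In a model it would be wired up like this (illustration only):
# amount_total = fields.Float(compute='_compute_total')
#
# @api.depends('line_ids.price')
# def _compute_total(self):
#     for doc in self:
#         doc.amount_total = compute_total(doc.line_ids.mapped('price'))
```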

Model method decorators

During our journey, the several methods we encountered used API decorators like @api.one. These are important for the server to know how to handle the method. We have already given some explanation of the decorators used; now let's recap the ones available and when they should be used:

  • @api.one: This feeds one record at a time to the function. The decorator does the recordset iteration for us and self is guaranteed to be a singleton. It's the one to use if our logic only needs to work with each record. It also aggregates the return value from each record into a list, which can have unintended side effects.
  • @api.multi: This handles a recordset. We should use it when our logic can depend on the whole recordset and seeing isolated records is not enough, or when we need a return value that is not a list, such as a dictionary with a window action. In practice, it is the one to use most of the time, as @api.one has some overhead and list wrapping effects on result values.
  • @api.model: This is a class-level static method, and it does not use any recordset data. For consistency, self is still a recordset, but its content is irrelevant.
  • @api.returns(model): This indicates that the method returns instances of the model given in the argument, such as res.partner, or self for the current model.

The following decorators have more specific purposes and were explained in detail in Chapter 5, Models – Structuring Application Data:

  • @api.depends(fld1,...): This is used for computed field functions to identify on what changes the (re)calculation should be triggered.
  • @api.constrains(fld1,...): This is used for validation functions to identify on what changes the validation check should be triggered.
  • @api.onchange(fld1,...): This is used for on-change functions to identify the fields on the form that will trigger the action.
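For instance, a constraint function usually wraps a plain check and raises a ValidationError when it fails. The following sketch uses hypothetical date_start and date_end fields:

```python
def check_dates(date_start, date_end):
    # Plain check used by the constraint: the end must not precede the start.
    # Unset dates are accepted.
    return date_end is None or date_start is None or date_end >= date_start

# In a model it would look like this (illustration only):
# @api.constrains('date_start', 'date_end')
# def _check_dates(self):
#     for rec in self:
#         if not check_dates(rec.date_start, rec.date_end):
#             raise ValidationError(
#                 'The end date cannot precede the start date.')
```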

In particular, the on-change methods can send a warning message to the user interface. For example, this could warn the user that the product quantity just entered is not available in stock, without preventing the user from continuing. This is done by having the method return a dictionary describing the warning message, as follows:

        return {
            'warning': {
                'title': 'Warning!',
                'message': 'The warning text'}
        }
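Putting it together, a minimal sketch of an on-change method returning that structure could look like this; the qty and qty_available names are assumptions for illustration:

```python
def build_warning(title, message):
    # Builds the dictionary structure the web client expects for a warning.
    return {'warning': {'title': title, 'message': message}}

# In a model it could be used like this (illustration only):
# @api.onchange('qty')
# def _onchange_qty(self):
#     if self.qty > self.qty_available:
#         return build_warning(
#             'Warning!', 'The quantity entered is not available in stock.')
```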

Debugging

We all know that a good part of a developer's work is to debug code. To do this, we often make use of a code editor that can set breakpoints and run our program step by step. Doing so with Odoo is possible, but it has its challenges.

If you're using Microsoft Windows as your development workstation, setting up an environment capable of running Odoo code from source is a nontrivial task. Also the fact that Odoo is a server that waits for client calls and only then acts on them makes it quite different to debug compared to client-side programs.

While this can certainly be done with Odoo, arguably it might not be the most pragmatic approach to the issue. We will introduce some basic debugging strategies, which can be as effective as many sophisticated IDEs with some practice.

Python's integrated debug tool pdb can do a decent job at debugging. We can set a breakpoint by inserting the following line in the desired place:

import pdb; pdb.set_trace()

Now restart the server so that the modified code is loaded. As soon as the program execution reaches that line, a (pdb) Python prompt will be shown in the terminal window where the server is running, waiting for our input.

This prompt works as a Python shell, where you can run any expression or command in the current execution context. This means that the current variables can be inspected and even modified. These are the most important shortcut commands available:

  • h: This is used to display a help summary of the pdb commands.
  • p: This is used to evaluate and print an expression.
  • pp: This is for pretty print, which is useful for larger dictionaries or lists.
  • l: This lists the code around the instruction to be executed next.
  • n (next): This steps over to the next instruction.
  • s (step): This steps into the current instruction.
  • c (continue): This continues execution normally.
  • u (up): This moves up the execution stack.
  • d (down): This moves down the execution stack.

The Odoo server also supports the --debug option. If it's used, when the server finds an exception, it enters into a post mortem mode at the line where the error was raised. This is a pdb console and it allows us to inspect the program state at the moment where the error was found.

It's worth noting that there are alternatives to the Python built-in debugger. One is pudb, which supports the same commands as pdb and works in text-only terminals, but uses a friendlier graphical display, making useful information readily available, such as the variables in the current context and their values.


It can be installed either through the system package manager or through pip, as shown here:

$ sudo apt-get install python-pudb  # using OS packages
$ pip install pudb  # using pip, possibly in a virtualenv

It works just like pdb; you just need to use pudb instead of pdb in the breakpoint code.

Another option is the IPython debugger, ipdb, which can be installed by using the following code:

$ pip install ipdb

Sometimes we just need to inspect the values of some variables or check whether some code blocks are being executed. A Python print statement can do the job perfectly without stopping the execution flow. As we are running the server in a terminal window, the printed text is shown in the standard output, but it is not stored in the server log if the log is being written to a file.

Another option to keep in mind is to set debug level log messages at sensitive points of our code if we feel that we might need them to investigate issues in a deployed instance. It would only be needed to elevate that server logging level to DEBUG and then inspect the log files.
