10.1. Extending MUSE

One key feature of the generalized sector's implementation is that it is easy to extend. MUSE can run custom Python functions, as long as the inputs and outputs of each function follow the standard specific to that step. We will look at a few examples here.

Below is a list of possible hooks, referenced by their implementation in the MUSE model:

  • register_interaction_net in muse.interactions: A list of lists of agents that interact together.

  • register_agent_interaction in muse.interactions: Given a list of interacting agents, perform the interaction.

  • register_production in muse.production: A method to compute the production from a sector, given the demand and the capacity.

  • register_initial_asset_transform in muse.hooks: Allows any kind of transformation to be applied to the assets of an agent, prior to investing.

  • register_final_asset_transform in muse.hooks: After computing the investment, this sets the assets that will be owned by the agents.

  • register_demand_share in muse.demand_share: During agent investment, this is the share of the demand that an agent will try and satisfy.

  • register_filter in muse.filters: A filter to remove technologies from consideration, during agent investment.

  • register_objective in muse.objectives: A quantity which allows an agent to compare technologies during investment.

  • register_decision in muse.decisions: A transformation applied to aggregate multiple objectives into a single objective during agent investment, e.g. via a weighted sum.

  • register_investment in muse.investment: During agent investment, matches the demand for future investment using the decision metric above.

  • register_output_quantity in muse.outputs.sector: A sectoral quantity to output for postmortem analysis.

  • register_output_sink in muse.outputs.sinks: A place to store an output quantity, e.g. a file with a given format, a database on premise or in the cloud, etc.

  • register_cached_quantity in muse.outputs.cache: A global quantity to output for postmortem analysis.

  • register_carbon_budget_fitter in muse.carbon_budget

  • register_carbon_budget_method in muse.carbon_budget

  • register_sector: Registers a function that can create a sector from a muse configuration object.
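Although the details differ per hook, they all follow the same registration pattern: a decorator records the function in a registry under its name, so that a TOML file can later refer to it by that name. The following is a hypothetical, simplified sketch of the pattern (the names here are illustrative, not MUSE's actual internals):

```python
from typing import Callable, Dict

# Hypothetical registry: MUSE keeps one such mapping per hook type.
OUTPUT_QUANTITIES: Dict[str, Callable] = {}


def register_quantity(function: Callable) -> Callable:
    """Record ``function`` under its own name and return it unchanged."""
    OUTPUT_QUANTITIES[function.__name__] = function
    return function


@register_quantity
def my_quantity(market):
    # A TOML file could now reference this function as "my_quantity".
    return market
```

The sections below use the real decorators, which follow this same scheme.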

10.1.1. Extending outputs

MUSE can be used to save custom quantities as well as data for analysis. There are two steps to this process:

  • Computing the quantity of interest

  • Storing the quantity of interest in a sink

In practice, this means that we can compute any quantity, such as the capacity or consumption of an energy source, and save it to a CSV or NetCDF file.

10.1.1.1. Output extension

To demonstrate this, we will compute a modified consumption quantity, then save it to a text file.

The current implementation of the consumption quantity, found in muse.outputs.sector, filters out values of 0. In this example, we would like to keep those zeros, but without editing the MUSE source code.

This is rather simple to do using MUSE’s hooks.

First we create a new function called consumption_zero as follows:

[1]:
from typing import List, Optional, Text

from muse.outputs.sector import market_quantity, register_output_quantity
from xarray import DataArray, Dataset


@register_output_quantity
def consumption_zero(
    market: Dataset,
    capacity: DataArray,
    technologies: Dataset,
):
    """Current consumption."""
    result = (
        market_quantity(market.consumption, sum_over="timeslice", drop=None)
        .rename("consumption")
        .to_dataframe()
        .round(4)
    )
    return result

The function we created takes three arguments. These arguments (market, capacity and technologies) are mandatory for the @register_output_quantity hook. Other hooks require different arguments.

Whilst this function is very similar to the consumption function in muse.outputs.sector, we have modified it slightly by allowing for values of 0.

The important part of this function is the @register_output_quantity decorator. This decorator ensures that this new quantity is addressable in the TOML file. Notice that we did not need to edit the source code to create our new function.

Next, we can create a sink to save the output quantity previously registered. For this example, this sink will simply dump the quantity it is given to a file, with the “Hello world!” message:

[2]:
from typing import Any

from muse.outputs.sinks import register_output_sink, sink_to_file


@register_output_sink(name="txt")
@sink_to_file(".txt")
def text_dump(data: Any, filename: Text) -> None:
    from pathlib import Path

    Path(filename).write_text(f"Hello world!\n\n{data}")

The code above makes use of two decorators: @register_output_sink and @sink_to_file.

@register_output_sink registers the function with MUSE, so that the sink is addressable from a TOML file. The second decorator, @sink_to_file, is optional. It adds some nice-to-have features to sinks that write files: for example, a way to specify filenames, and a check that prevents files from being overwritten unless explicitly allowed.
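To illustrate what such a file-sink wrapper can provide, here is a hypothetical sketch (not sink_to_file's actual implementation; sink_to_file_sketch and dump are illustrative names):

```python
from pathlib import Path
from typing import Any, Callable


def sink_to_file_sketch(suffix: str) -> Callable:
    """Wrap a sink so it enforces a file suffix and overwrite protection."""

    def decorator(sink: Callable) -> Callable:
        def wrapper(data: Any, filename: str, overwrite: bool = False) -> None:
            path = Path(filename).with_suffix(suffix)
            if path.exists() and not overwrite:
                raise OSError(f"{path} already exists; pass overwrite=True")
            sink(data, str(path))

        return wrapper

    return decorator


@sink_to_file_sketch(".txt")
def dump(data: Any, filename: str) -> None:
    # The wrapped sink only has to write; the wrapper handles the rest.
    Path(filename).write_text(str(data))
```

This is why, in the TOML examples below, file sinks accept both a filename template and an overwrite flag.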

Next, we want to modify the TOML file to actually use this output type. To do this, we add a section to the output table:

[[sectors.residential.outputs]]
quantity = "consumption_zero"
sink = "txt"
filename = "{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}"

The last line above allows us to specify the name of the file, using placeholders such as {Sector}, {Quantity} and {year} that are filled in at runtime.

There can be as many sections of this kind as we like in the TOML file, allowing for multiple outputs.
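For instance, the same quantity could be sent to two different sinks by adding two sections (an illustrative fragment; the sinks shown are the built-in csv sink and the txt sink registered above):

```toml
[[sectors.residential.outputs]]
quantity = "consumption_zero"
sink = "txt"
filename = "{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}"

[[sectors.residential.outputs]]
quantity = "consumption_zero"
sink = "csv"
filename = "{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}"
```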

Next, we copy the default model provided with MUSE to a local subfolder called "model". Then we read the settings.toml file and modify it using Python. You may prefer to modify the settings.toml file in your favorite text editor; however, modifying the file programmatically allows us to routinely run this notebook as part of MUSE's test suite and check that the tutorial is still up to date.

[3]:
from pathlib import Path

from muse import examples
from toml import dump, load

model_path = examples.copy_model(overwrite=True)
settings = load(model_path / "settings.toml")
new_output = {
    "quantity": "consumption_zero",
    "sink": "txt",
    "overwrite": True,
    "filename": "{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}",
}
settings["sectors"]["residential"]["outputs"].append(new_output)
dump(settings, (model_path / "modified_settings.toml").open("w"))
settings
-- 2024-04-25 14:55:14 - muse.sectors.register - INFO
Sector legacy registered.

-- 2024-04-25 14:55:14 - muse.sectors.register - INFO
Sector preset registered, with alias presets.

-- 2024-04-25 14:55:14 - muse.sectors.register - INFO
Sector default registered.

[3]:
{'time_framework': [2020, 2025, 2030, 2035, 2040, 2045, 2050],
 'foresight': 5,
 'regions': ['R1'],
 'interest_rate': 0.1,
 'interpolation_mode': 'Active',
 'log_level': 'info',
 'excluded_commodities': ['wind'],
 'equilibrium_variable': 'demand',
 'maximum_iterations': 1,
 'tolerance': 0.1,
 'tolerance_unmet_demand': -0.1,
 'outputs': [{'quantity': 'prices',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv'},
  {'quantity': 'capacity',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv',
   'index': False,
   'keep_columns': ['technology',
    'dst_region',
    'region',
    'agent',
    'sector',
    'type',
    'year',
    'capacity']}],
 'carbon_budget_control': {'budget': []},
 'global_input_files': {'projections': '{path}/input/Projections.csv',
  'global_commodities': '{path}/input/GlobalCommodities.csv'},
 'sectors': {'residential': {'type': 'default',
   'priority': 1,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/residential/Technodata.csv',
   'commodities_in': '{path}/technodata/residential/CommIn.csv',
   'commodities_out': '{path}/technodata/residential/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/residential/ExistingCapacity.csv',
     'lpsolver': 'scipy',
     'constraints': ['max_production',
      'max_capacity_expansion',
      'demand',
      'search_space'],
     'demand_share': 'new_and_retro',
     'forecast': 5}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False},
    {'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'sink': 'csv',
     'overwrite': True,
     'quantity': {'name': 'supply',
      'sum_over': 'timeslice',
      'drop': ['comm_usage', 'units_prices']}},
    {'quantity': 'consumption_zero',
     'sink': 'txt',
     'overwrite': True,
     'filename': '{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}'}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'power': {'type': 'default',
   'priority': 2,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/power/Technodata.csv',
   'commodities_in': '{path}/technodata/power/CommIn.csv',
   'commodities_out': '{path}/technodata/power/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/power/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'gas': {'type': 'default',
   'priority': 3,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/gas/Technodata.csv',
   'commodities_in': '{path}/technodata/gas/CommIn.csv',
   'commodities_out': '{path}/technodata/gas/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/gas/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'residential_presets': {'type': 'presets',
   'priority': 0,
   'consumption_path': '{path}/technodata/preset/*Consumption.csv'}},
 'timeslices': {'level_names': ['month', 'day', 'hour'],
  'all-year': {'all-week': {'night': 1460,
    'morning': 1460,
    'afternoon': 1460,
    'early-peak': 1460,
    'late-peak': 1460,
    'evening': 1460}}}}

We can now run the simulation. There are two ways to do this. From the command-line, where we can do:

python3 -m muse model/modified_settings.toml

(note that slashes may be the other way on Windows). Or directly from the notebook:

[4]:
import logging

from muse.mca import MCA

logging.getLogger("muse").setLevel(0)
mca = MCA.factory(model_path / "modified_settings.toml")
mca.run();

We can now check that the simulation has created the files that we expect, and that our "Hello world!" message was written:

[5]:
all_txt_files = sorted((Path() / "Results").glob("Residential*.txt"))
assert "Hello world!" in all_txt_files[0].read_text()
all_txt_files
[5]:
[PosixPath('Results/ResidentialConsumption_Zero2020.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2025.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2030.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2035.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2040.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2045.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2050.txt')]

Our model produced the files we were expecting and passed the assert statement, meaning that the "Hello world!" message was found in the outputs.

10.1.1.2. Cached quantities

The results of intermediate calculations are often useful for post-mortem analysis, or simply to build a more detailed picture of how the calculation evolves over time. Adding a new quantity to cache and output has three steps:

  1. Register a function with register_cached_quantity that consolidates the cached quantity, prior to outputting, into a form that one of the sinks can accept. This function can also modify what is saved, for example filtering by technology or agent.

  2. Cache the quantity in each iteration of the market using muse.outputs.cache.cache_quantity in the relevant part of your code.

  3. Indicate in the TOML file that you want to save that quantity, and where.

The last point is identical to requesting that a sector quantity be saved, as described in the previous section, but with the information placed in the global section of the TOML file rather than within a sector.

All functions registered with register_investment or register_objective are automatically cached, i.e. cache_quantity is called within the hook, taking as input the output of the investment or objective function. In particular, investment functions calculate both the capacity and the production after investment has been made. A function that deals with the cached capacity is already registered, but here we are going to register an alternative that caches only the capacity related to assets owned by retrofit agents:

[6]:
from typing import List, MutableMapping, Text

import pandas as pd
import xarray as xr
from muse.outputs.cache import consolidate_quantity, register_cached_quantity


@register_cached_quantity(overwrite=True)
def capacity(
    cached: List[xr.DataArray],
    agents: MutableMapping[Text, MutableMapping[Text, Text]],
) -> pd.DataFrame:
    """Consolidates the cached capacity into a single DataFrame to save.

    Args:
        cached (List[xr.DataArray]): The list of cached arrays during the calculation of
        the time period with the capacity.
        agents (MutableMapping[Text, MutableMapping[Text, Text]]): Agents' metadata.

    Returns:
        pd.DataFrame: DataFrame with the consolidated data for retro agents.
    """
    consolidated = consolidate_quantity("capacity", cached, agents)
    return consolidated.query("category == 'retrofit'")

The above function is nearly identical to muse.outputs.cache.capacity, but it filters the output so that only information related to retrofit agents is included. As a function with the same name, intended to cache the capacity, already exists, we have to set overwrite=True in the decorator so that it replaces the built-in version.

The consolidate_quantity function is a convenient tool that extracts the last records from the list of cached DataArrays and combines them with the agents' metadata in a DataFrame, but you can write your own solution to assemble an output that the chosen sink can digest.
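As a hypothetical sketch of such a hand-rolled alternative (my_consolidation is an illustrative name, not part of MUSE), one could keep only the most recent cached array and flatten it into a long-format DataFrame:

```python
from typing import List

import pandas as pd
import xarray as xr


def my_consolidation(cached: List[xr.DataArray]) -> pd.DataFrame:
    """Flatten the most recent cached capacity array into a long DataFrame."""
    latest = cached[-1].rename("capacity")
    # to_dataframe() turns the array's dimensions into index columns.
    return latest.to_dataframe().reset_index()
```

Whatever the approach, the result must be something the chosen sink can digest, e.g. a DataFrame for the aggregate or csv sinks.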

Next, we need to indicate in the TOML file that we want to cache that quantity. To do that, we write the following in the global section:

[[outputs_cache]]
quantity = "capacity"
sink = "aggregate"
filename = "{cwd}/{default_output_dir}/Cache{Quantity}.csv"
index = false

The aggregate sink already exists, so we do not need to create it. If you want to customise further how the data is saved, create your own sink as described above.

The next steps are similar to those already described: create a modified settings file, run the simulation and check that the output is indeed what we wanted.

[7]:
from pathlib import Path

from muse import examples
from toml import dump, load

model_path = examples.copy_model(overwrite=True)
settings = load(model_path / "settings.toml")
new_output = {
    "quantity": "capacity",
    "sink": "aggregate",
    "index": False,
    "filename": "{cwd}/{default_output_dir}/Cache{Quantity}.csv",
}
settings["outputs_cache"] = []
settings["outputs_cache"].append(new_output)
dump(settings, (model_path / "modified_settings.toml").open("w"))
settings
[7]:
{'time_framework': [2020, 2025, 2030, 2035, 2040, 2045, 2050],
 'foresight': 5,
 'regions': ['R1'],
 'interest_rate': 0.1,
 'interpolation_mode': 'Active',
 'log_level': 'info',
 'excluded_commodities': ['wind'],
 'equilibrium_variable': 'demand',
 'maximum_iterations': 1,
 'tolerance': 0.1,
 'tolerance_unmet_demand': -0.1,
 'outputs': [{'quantity': 'prices',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv'},
  {'quantity': 'capacity',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv',
   'index': False,
   'keep_columns': ['technology',
    'dst_region',
    'region',
    'agent',
    'sector',
    'type',
    'year',
    'capacity']}],
 'carbon_budget_control': {'budget': []},
 'global_input_files': {'projections': '{path}/input/Projections.csv',
  'global_commodities': '{path}/input/GlobalCommodities.csv'},
 'sectors': {'residential': {'type': 'default',
   'priority': 1,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/residential/Technodata.csv',
   'commodities_in': '{path}/technodata/residential/CommIn.csv',
   'commodities_out': '{path}/technodata/residential/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/residential/ExistingCapacity.csv',
     'lpsolver': 'scipy',
     'constraints': ['max_production',
      'max_capacity_expansion',
      'demand',
      'search_space'],
     'demand_share': 'new_and_retro',
     'forecast': 5}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False},
    {'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'sink': 'csv',
     'overwrite': True,
     'quantity': {'name': 'supply',
      'sum_over': 'timeslice',
      'drop': ['comm_usage', 'units_prices']}}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'power': {'type': 'default',
   'priority': 2,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/power/Technodata.csv',
   'commodities_in': '{path}/technodata/power/CommIn.csv',
   'commodities_out': '{path}/technodata/power/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/power/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'gas': {'type': 'default',
   'priority': 3,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/gas/Technodata.csv',
   'commodities_in': '{path}/technodata/gas/CommIn.csv',
   'commodities_out': '{path}/technodata/gas/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/gas/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'residential_presets': {'type': 'presets',
   'priority': 0,
   'consumption_path': '{path}/technodata/preset/*Consumption.csv'}},
 'timeslices': {'level_names': ['month', 'day', 'hour'],
  'all-year': {'all-week': {'night': 1460,
    'morning': 1460,
    'afternoon': 1460,
    'early-peak': 1460,
    'late-peak': 1460,
    'evening': 1460}}},
 'outputs_cache': [{'quantity': 'capacity',
   'sink': 'aggregate',
   'index': False,
   'filename': '{cwd}/{default_output_dir}/Cache{Quantity}.csv'}]}

We can now run the simulation. There are two ways to do this. From the command-line, where we can do:

python3 -m muse model/modified_settings.toml

(note that slashes may be the other way on Windows). Or directly from the notebook:

[8]:
import logging

from muse.mca import MCA

logging.getLogger("muse").setLevel(0)
mca = MCA.factory(model_path / "modified_settings.toml")
mca.run();

We can now check that the simulation has created the file that we expect, containing the cached capacity for retrofit agents only:

[9]:
cache_files = sorted((Path() / "Results").glob("Cache*"))
assert len(cache_files) == 1
cached = pd.read_csv(cache_files[0])
assert tuple(cached.category.unique()) == ("retrofit",)

10.1.2. Adding TOML parameters to the outputs

It would be useful if we could pass parameters from the TOML file to our new functions consumption_zero and text_dump. For example, in our previous iteration the consumption output aggregated the data over "timeslice" by hardcoding the sum_over argument; instead, we can expose sum_over as a parameter settable from the TOML file. Similarly, we can make the message output by text_dump configurable.

Not all hooks are this flexible (for historical reasons, rather than any intrinsic difficulty). However, for outputs, we can do this as follows:

[10]:
@register_output_quantity(overwrite=True)
def consumption_zero(  # noqa: F811
    market: Dataset,
    capacity: DataArray,
    technologies: Dataset,
    sum_over: Optional[List[Text]] = None,
    drop: Optional[List[Text]] = None,
    rounding: int = 4,
):
    """Current consumption."""
    result = (
        market_quantity(market.consumption, sum_over=sum_over, drop=drop)
        .rename("consumption")
        .to_dataframe()
        .round(rounding)
    )
    return result


@register_output_sink(name="txt", overwrite=True)
@sink_to_file(".txt")
def text_dump(data: Any, filename: Text, msg: Optional[Text] = "Hello, world!") -> None:  # noqa: F811
    from pathlib import Path

    Path(filename).write_text(f"{msg}\n\n{data}")

We simply added parameters as arguments to both of our functions: consumption_zero and text_dump.

Note: The overwrite argument allows us to overwrite previously registered functions. This is useful in a notebook such as this one, but it should not be used in general. If overwrite were false, the code would issue a warning and the TOML file would still refer to the functions originally registered at the beginning of the notebook. Overwriting is also useful when iterating on custom modules.
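The behaviour described above can be sketched with a toy registry (hypothetical code, not MUSE's actual implementation):

```python
import warnings
from typing import Callable, Dict

REGISTRY: Dict[str, Callable] = {}


def register(name: str, overwrite: bool = False) -> Callable:
    """Register a function, keeping any existing entry unless overwrite is True."""

    def decorator(func: Callable) -> Callable:
        if name in REGISTRY and not overwrite:
            warnings.warn(f"'{name}' already registered; keeping the original")
            return func
        REGISTRY[name] = func
        return func

    return decorator


@register("sink")
def first():
    return "original"


@register("sink")  # ignored with a warning: overwrite defaults to False
def second():
    return "replacement"


@register("sink", overwrite=True)  # replaces the original entry
def third():
    return "final"
```

After these three registrations, the name "sink" resolves to the last function, because only it passed overwrite=True.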

Now we can modify the output section to take additional arguments:

[[sectors.residential.outputs]]
quantity.name = "consumption_zero"
quantity.sum_over = "timeslice"
sink.name = "txt"
sink.filename = "{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}"
sink.msg = "Hello, you!"
sink.overwrite = True

Here, we still want to use the consumption_zero function and the txt sink, but we would like to change the message from "Hello world!" to "Hello, you!" via the TOML file.

Now, both sink and quantity are dictionaries which can take any number of arguments; previously, we were using a shorthand for convenience. Again, we create a new settings file and run it with our new parameters, which are passed through to our new functions.
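The relationship between the two forms can be illustrated as follows (an illustrative fragment):

```toml
# The shorthand form:
#   sink = "txt"
# is equivalent to the expanded dictionary form:
sink.name = "txt"
# which can then carry any number of extra arguments for the sink:
sink.msg = "Hello, you!"
```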

[11]:
from pathlib import Path

from muse import examples
from toml import dump, load

model_path = examples.copy_model(overwrite=True)
settings = load(model_path / "settings.toml")
settings["sectors"]["residential"]["outputs"] = [
    {
        "quantity": {"name": "consumption_zero", "sum_over": "timeslice"},
        "sink": {
            "name": "txt",
            "filename": "{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}",
            "msg": "Hello, you!",
            "overwrite": True,
        },
    }
]

dump(settings, (model_path / "modified_settings_2.toml").open("w"))
settings
[11]:
{'time_framework': [2020, 2025, 2030, 2035, 2040, 2045, 2050],
 'foresight': 5,
 'regions': ['R1'],
 'interest_rate': 0.1,
 'interpolation_mode': 'Active',
 'log_level': 'info',
 'excluded_commodities': ['wind'],
 'equilibrium_variable': 'demand',
 'maximum_iterations': 1,
 'tolerance': 0.1,
 'tolerance_unmet_demand': -0.1,
 'outputs': [{'quantity': 'prices',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv'},
  {'quantity': 'capacity',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv',
   'index': False,
   'keep_columns': ['technology',
    'dst_region',
    'region',
    'agent',
    'sector',
    'type',
    'year',
    'capacity']}],
 'carbon_budget_control': {'budget': []},
 'global_input_files': {'projections': '{path}/input/Projections.csv',
  'global_commodities': '{path}/input/GlobalCommodities.csv'},
 'sectors': {'residential': {'type': 'default',
   'priority': 1,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/residential/Technodata.csv',
   'commodities_in': '{path}/technodata/residential/CommIn.csv',
   'commodities_out': '{path}/technodata/residential/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/residential/ExistingCapacity.csv',
     'lpsolver': 'scipy',
     'constraints': ['max_production',
      'max_capacity_expansion',
      'demand',
      'search_space'],
     'demand_share': 'new_and_retro',
     'forecast': 5}},
   'outputs': [{'quantity': {'name': 'consumption_zero',
      'sum_over': 'timeslice'},
     'sink': {'name': 'txt',
      'filename': '{cwd}/{default_output_dir}/{Sector}{Quantity}{year}{suffix}',
      'msg': 'Hello, you!',
      'overwrite': True}}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'power': {'type': 'default',
   'priority': 2,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/power/Technodata.csv',
   'commodities_in': '{path}/technodata/power/CommIn.csv',
   'commodities_out': '{path}/technodata/power/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/power/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'gas': {'type': 'default',
   'priority': 3,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/gas/Technodata.csv',
   'commodities_in': '{path}/technodata/gas/CommIn.csv',
   'commodities_out': '{path}/technodata/gas/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/gas/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'residential_presets': {'type': 'presets',
   'priority': 0,
   'consumption_path': '{path}/technodata/preset/*Consumption.csv'}},
 'timeslices': {'level_names': ['month', 'day', 'hour'],
  'all-year': {'all-week': {'night': 1460,
    'morning': 1460,
    'afternoon': 1460,
    'early-peak': 1460,
    'late-peak': 1460,
    'evening': 1460}}}}

We then run the simulation again:

[12]:
mca = MCA.factory(model_path / "modified_settings_2.toml")
mca.run();

And we can check the parameters were used accordingly:

[13]:
all_txt_files = sorted((Path() / "Results").glob("Residential*.txt"))
assert len(all_txt_files) == 7
assert "Hello, you!" in all_txt_files[0].read_text()
all_txt_files
[13]:
[PosixPath('Results/ResidentialConsumption_Zero2020.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2025.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2030.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2035.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2040.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2045.txt'),
 PosixPath('Results/ResidentialConsumption_Zero2050.txt')]

Again, we can see that the number of output files generated was as we expected, and that our new message "Hello, you!" was found within these files. This means that our output quantity and sink functions worked as expected.

10.1.3. Where to store new functionality

As previously demonstrated, we can easily add new functionality to MUSE. However, a Jupyter notebook is not always the best place to keep it. It is also possible to store the functions in an arbitrary Python file, such as the following:

[14]:
%%writefile mynewfunctions.py
from typing import Any, Text

from muse.outputs.sinks import register_output_sink, sink_to_file


@register_output_sink(name="dummy")
@sink_to_file(".txt")
def text_dump(data: Any, filename: Text) -> None:
    from pathlib import Path
    Path(filename).write_text(f"Hello world!\n\n{data}")

Overwriting mynewfunctions.py

We can then tell the TOML file where to find it:

plugins = "{cwd}/mynewfunctions.py"

[[sectors.residential.outputs]]
quantity = "capacity"
sink = "dummy"
overwrite = true

Alternatively, plugins can also be given a list of paths rather than a single one, as done below.

[15]:
settings = load(model_path / "settings.toml")
settings["plugins"] = ["{cwd}/mynewfunctions.py"]
settings["sectors"]["residential"]["outputs"] = [
    {"quantity": "capacity", "sink": "dummy", "overwrite": "true"}
]
dump(settings, (model_path / "modified_settings.toml").open("w"))
settings
[15]:
{'time_framework': [2020, 2025, 2030, 2035, 2040, 2045, 2050],
 'foresight': 5,
 'regions': ['R1'],
 'interest_rate': 0.1,
 'interpolation_mode': 'Active',
 'log_level': 'info',
 'excluded_commodities': ['wind'],
 'equilibrium_variable': 'demand',
 'maximum_iterations': 1,
 'tolerance': 0.1,
 'tolerance_unmet_demand': -0.1,
 'outputs': [{'quantity': 'prices',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv'},
  {'quantity': 'capacity',
   'sink': 'aggregate',
   'filename': '{cwd}/{default_output_dir}/MCA{Quantity}.csv',
   'index': False,
   'keep_columns': ['technology',
    'dst_region',
    'region',
    'agent',
    'sector',
    'type',
    'year',
    'capacity']}],
 'carbon_budget_control': {'budget': []},
 'global_input_files': {'projections': '{path}/input/Projections.csv',
  'global_commodities': '{path}/input/GlobalCommodities.csv'},
 'sectors': {'residential': {'type': 'default',
   'priority': 1,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/residential/Technodata.csv',
   'commodities_in': '{path}/technodata/residential/CommIn.csv',
   'commodities_out': '{path}/technodata/residential/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/residential/ExistingCapacity.csv',
     'lpsolver': 'scipy',
     'constraints': ['max_production',
      'max_capacity_expansion',
      'demand',
      'search_space'],
     'demand_share': 'new_and_retro',
     'forecast': 5}},
   'outputs': [{'quantity': 'capacity', 'sink': 'dummy', 'overwrite': 'true'}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'power': {'type': 'default',
   'priority': 2,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/power/Technodata.csv',
   'commodities_in': '{path}/technodata/power/CommIn.csv',
   'commodities_out': '{path}/technodata/power/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/power/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'gas': {'type': 'default',
   'priority': 3,
   'dispatch_production': 'share',
   'technodata': '{path}/technodata/gas/Technodata.csv',
   'commodities_in': '{path}/technodata/gas/CommIn.csv',
   'commodities_out': '{path}/technodata/gas/CommOut.csv',
   'subsectors': {'retro_and_new': {'agents': '{path}/technodata/Agents.csv',
     'existing_capacity': '{path}/technodata/gas/ExistingCapacity.csv',
     'lpsolver': 'scipy'}},
   'outputs': [{'filename': '{cwd}/{default_output_dir}/{Sector}/{Quantity}/{year}{suffix}',
     'quantity': 'capacity',
     'sink': 'csv',
     'overwrite': True,
     'index': False}],
   'interactions': [{'net': 'new_to_retro', 'interaction': 'transfer'}]},
  'residential_presets': {'type': 'presets',
   'priority': 0,
   'consumption_path': '{path}/technodata/preset/*Consumption.csv'}},
 'timeslices': {'level_names': ['month', 'day', 'hour'],
  'all-year': {'all-week': {'night': 1460,
    'morning': 1460,
    'afternoon': 1460,
    'early-peak': 1460,
    'late-peak': 1460,
    'evening': 1460}}},
 'plugins': ['{cwd}/mynewfunctions.py']}

10.1.4. Next steps

In the next section we will implement a technology filter, to stop agents from investing in a certain technology, and a new decision metric to combine multiple objectives.