12. API

MUSE model

12.1. Market Clearing Algorithm

12.1.1. Main MCA

class FindEquilibriumResults(converged, market, sectors)[source]

Result of find_equilibrium().

converged

Alias for field number 0

market

Alias for field number 1

sectors

Alias for field number 2

class MCA(sectors, market, outputs=None, outputs_cache=None, time_framework=[2010, 2020, 2030, 2040, 2050, 2060, 2070, 2080, 2090], equilibrium=True, equilibrium_variable='demand', maximum_iterations=3, tolerance=0.1, tolerance_unmet_demand=-0.1, excluded_commodities=None, carbon_budget=None, carbon_price=None, carbon_commodities=None, debug=False, control_undershoot=True, control_overshoot=True, carbon_method='fitting', method_options=None)[source]

Market Clearing Algorithm.

The market clearing algorithm is the main object implementing the MUSE model. It is responsible for orchestrating how the sectors are run and how they interface with one another, with the general market, and with the carbon market.

calibrate_legacy_sectors()[source]

Runs a calibration step in the legacy sectors, running over the historical years.

classmethod factory(settings)[source]

Loads MCA from input settings and input files.

Parameters:

settings – namedtuple with the global MUSE input settings.

Returns:

The loaded MCA
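
For example, a minimal sketch of loading and running the model from a TOML settings file; the reader location, muse.readers.toml.read_settings, is an assumption:

from muse.mca import MCA
from muse.readers.toml import read_settings  # assumed reader location

settings = read_settings("settings.toml")  # global MUSE input settings
mca = MCA.factory(settings)
mca.run()  # loops over the years of the time framework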

find_equilibrium(market, sectors=None, maxiter=None)[source]

Specialised version of the find_equilibrium function.

Parameters:

market – Commodities market, with the prices, supply, consumption and demand.

Returns:

A tuple with the updated market (prices, supply, consumption and demand) and sectors.

run()[source]

Initiates the calculation, starting with the loop over years.

This method starts the main MUSE loop, going over the years of the simulation. Internally, it runs the carbon budget loop, which updates the carbon prices, if needed, and the equilibrium loop, which tries to reach an equilibrium between prices, demand and supply.

Returns:

None

update_carbon_budget(market, year_idx)[source]

Specialised version of the update_carbon_budget function.

Parameters:
  • market – Commodities market, with the prices, supply, consumption and demand.

  • year_idx – Index of the year of interest.

Returns:

An updated market with prices, supply, consumption and demand.

update_carbon_price(market)[source]

Calculates the updated carbon price, if required.

If the emissions calculated for the next time period are larger than the limit, then the carbon price needs to be updated to ensure that whatever the sectors do, the carbon budget limit is not exceeded.

Parameters:

market – Market, with the prices, supply, consumption and demand.

Returns:

The new carbon price or None

class SingleYearIterationResult(market, sectors)[source]

Result of iterating over sectors for a year.

Convenience tuple naming the return values of single_year_iteration().

market

Alias for field number 0

sectors

Alias for field number 1

check_demand_fulfillment(market, tol)[source]

Checks if the supply will fulfill all the demand in the future.

If it does not, it logs a warning.

Parameters:
  • market – Commodities market, with the prices, supply, consumption and demand.

  • tol – Tolerance for the unmet demand.

Returns:

True if the supply fulfills the demand; False otherwise.

check_equilibrium(market, int_market, tolerance, equilibrium_variable, year=None)[source]

Checks if equilibrium has been reached.

This function checks whether the difference in either the demand or the prices between iterations is smaller than a certain tolerance. If it is, the process is assumed to have converged.

Parameters:
  • market – The market values in this iteration.

  • int_market – The market values in the previous iteration.

  • tolerance – Tolerance for reaching equilibrium.

  • equilibrium_variable – Variable to use to calculate the equilibrium condition.

  • year – Year for which to check changes. Defaults to the minimum year in the market.

Returns:

True if converged, False otherwise.

find_equilibrium(market, sectors, maxiter=3, tol=0.1, equilibrium_variable='demand', tol_unmet_demand=-0.1, excluded_commodities=None, equilibrium=True)[source]

Runs the equilibrium loop.

If convergence is reached, the function returns the new market. If the maximum number of iterations is reached instead, a warning is issued in the log and the function returns with the current status.

Parameters:
  • market – Commodities market, with the prices, supply, consumption and demand.

  • sectors – A list of the sectors participating in the simulation.

  • maxiter – Maximum number of iterations.

  • tol – Tolerance for reaching equilibrium.

  • equilibrium_variable – Variable to use to calculate the equilibrium condition.

  • tol_unmet_demand – Tolerance for the unmet demand.

  • excluded_commodities – Commodities to be excluded in check_demand_fulfillment

  • equilibrium – Whether an equilibrium should be sought. Useful for testing.

Returns:

A tuple with the updated market (prices, supply, consumption and demand), sectors, and convergence status.
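
A hedged usage sketch, assuming market is an xarray Dataset holding prices, supply, and consumption, and sectors is a list of sector instances:

from muse.mca import find_equilibrium

result = find_equilibrium(market, sectors, maxiter=10, tol=0.05)
if not result.converged:
    print("no equilibrium within 10 iterations; using the last iterate")
market, sectors = result.market, result.sectors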

single_year_iteration(market, sectors)[source]

Runs one iteration of the sectors (runs each sector once).

Parameters:
  • market – An initial market with prices, supply, consumption.

  • sectors – A list of the sectors participating in the simulation.

Returns:

A tuple with the new market and sectors.

12.1.2. Carbon Budget

CARBON_BUDGET_FITTERS = {'Exponential': <function exponential>, 'Linear': <function linear>, 'exponential': <function exponential>, 'linear': <function linear>}

Dictionary of carbon budget fitters.

CARBON_BUDGET_FITTERS_SIGNATURE

Carbon budget fitter signature.

alias of Callable[[ndarray, ndarray, int], float]

CARBON_BUDGET_METHODS = {'Bisection': <function bisection>, 'Fitting': <function fitting>, 'bisection': <function bisection>, 'fitting': <function fitting>}

Dictionary of carbon budget methods.

CARBON_BUDGET_METHODS_SIGNATURE

Carbon budget method signature.

alias of Callable[[Dataset, list, Callable, DataArray, DataArray], float]

bisect_loop(market, sectors, equilibrium, commodities, new_price)[source]

Calls a market equilibrium iteration within the bisection. This updates the emissions during the iterations.

Parameters:
  • market – Market, with the prices, supply, consumption and demand,

  • sectors – List of sectors,

  • equilibrium – Method for searching market equilibrium,

  • commodities – List of carbon-related commodities,

  • new_price – New carbon price from bisection,

Returns:

Emissions estimated at the new carbon price.

bisection(market, sectors, equilibrium, carbon_budget, carbon_price, commodities, sample_size=2, refine_price=True, price_too_high_threshold=10, fitter='slinear')[source]

Applies a bisection algorithm to escalate the carbon price and meet the budget. A carbon market is meant as a pool of emissions for all the modelled regions; therefore, the carbon price applies to all modelled regions. Bisection iteratively estimates the emissions, varying the carbon price until convergence or a stop criterion is met. Builds on register_carbon_budget_method().

Parameters:
  • market – Market, with the prices, supply, consumption and demand,

  • sectors – List of sectors,

  • equilibrium – Method for searching market equilibrium,

  • carbon_budget – DataArray with the carbon budget,

  • carbon_price – DataArray with the carbon price,

  • commodities – List of carbon-related commodities,

  • sample_size – Number of iterations for bisection,

  • refine_price – Boolean to decide on whether carbon price should be refined,

  • price_too_high_threshold – Threshold above which a price is considered too high.

  • fitter – Interpolation method.

Returns:

Value of global carbon price

create_sample(carbon_price, current_emissions, budget, size=4)[source]

Calculates a sample of carbon prices to estimate the adjusted carbon price.

For each of these prices, the equilibrium loop will be run, obtaining a new value for the emissions. Out of those price-emissions pairs, the final carbon price will be estimated.

Parameters:
  • carbon_price – Current carbon price,

  • current_emissions – Current emissions,

  • budget – Carbon budget,

  • size – Number of points in the sample.

Returns:

An array with the sample prices.

exp_guess_and_weights(prices, emissions, budget)[source]

Estimates initial values for the exponential fitting algorithm and the weights.

The points closest to the budget are used to estimate the initial guess. They also have the highest weight.

Parameters:
  • prices – An array with the sample carbon prices,

  • emissions – An array with the corresponding emissions,

  • budget – The carbon budget for the time period.

Returns:

The initial guess and weights

exponential(prices, emissions, budget)[source]

Fits the prices-emissions pairs to an exponential function.

Once that is done, an optimal carbon price is estimated.

Parameters:
  • prices – An array with the sample carbon prices,

  • emissions – An array with the corresponding emissions,

  • budget – The carbon budget for the time period.

Returns:

The optimal carbon price.

fitting(market, sectors, equilibrium, carbon_budget, carbon_price, commodities, sample_size=4, refine_price=True, price_too_high_threshold=10, fitter='slinear')[source]

Used to solve the carbon market: given the emissions of a period, adjusts the carbon price to meet the budget. A carbon market is meant as a pool of emissions for all the modelled regions; therefore, the carbon price applies to all modelled regions. The method solves an equation by fitting the emission-carbon price relation.

Parameters:
  • market – Market, with the prices, supply, and consumption,

  • sectors – List of market sectors,

  • equilibrium – Method for searching market equilibrium,

  • carbon_budget – Limit on emissions,

  • carbon_price – Current carbon price,

  • commodities – List of commodities to limit (i.e. emissions),

  • sample_size – Sample size for fitting,

  • refine_price – If True, performs checks on the estimated carbon price,

  • price_too_high_threshold – Threshold on the carbon price,

  • fitter – Method to fit emissions against the carbon price.

Returns:

The adjusted carbon price (new_price) required to meet the budget.

linear(prices, emissions, budget)[source]

Fits the prices-emissions pairs to a linear function.

Once that is done, an optimal carbon price is estimated.

Parameters:
  • prices – An array with the sample carbon prices,

  • emissions – An array with the corresponding emissions,

  • budget – The carbon budget for the time period.

Returns:

The optimal carbon price.

linear_guess_and_weights(prices, emissions, budget)[source]

Estimates initial values for the linear fitting algorithm and the weights. The points closest to the budget are used to estimate the initial guess. They also have the highest weight.

Parameters:
  • prices – An array with the sample carbon prices,

  • emissions – An array with the corresponding emissions,

  • budget – The carbon budget for the time period.

Returns:

The initial guess and weights

min_max_bisect(low, lb, up, ub, market, sectors, equilibrium, commodities, threshold)[source]

Refines the bisection algorithm used to escalate the carbon price and meet the budget. As emissions can be a discontinuous function of the carbon price, this method improves the search for the bounds when discontinuities are met.

Parameters:
  • low – Value of carbon price at lower bound,

  • lb – Value of emissions at lower bound,

  • up – Value of carbon price at upper bound,

  • ub – Value of emissions at upper bound,

  • market – Market, with the prices, supply, consumption and demand,

  • sectors – List of sectors,

  • equilibrium – Method for searching maerket equilibrium,

  • commodities – List of carbon-related commodities,

  • threshold – Threshold above which a price is considered too high.

Returns:

Values of the lower and upper bounds on the global carbon price.

refine_new_price(market, historic_price, carbon_budget, sample, price, commodities, price_too_high_threshold)[source]

Refines the value of the carbon price to ensure it is not too high or too low, compared to heuristic values.

Parameters:
  • market – Market, with prices, supply, and consumption,

  • historic_price – DataArray with the historic carbon prices,

  • carbon_budget – DataArray with the carbon budget,

  • sample – Sample carbon price points,

  • price – Current carbon price, to be refined,

  • commodities – List of carbon-related commodities,

  • price_too_high_threshold – Threshold above which a price is considered too high.

Returns:

A refined carbon price.

register_carbon_budget_fitter(function=None)[source]

Decorator to register a carbon budget fitter function.

register_carbon_budget_method(function=None)[source]

Decorator to register a carbon budget function.
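
As a hedged sketch, a custom fitter following CARBON_BUDGET_FITTERS_SIGNATURE above; the module path muse.carbon_budget is an assumption:

import numpy as np

from muse.carbon_budget import register_carbon_budget_fitter  # assumed path

@register_carbon_budget_fitter
def quadratic(prices: np.ndarray, emissions: np.ndarray, budget: int) -> float:
    """Fit price-emission samples with a parabola and solve for the budget."""
    a, b, c = np.polyfit(prices, emissions, 2)
    # Solve emissions(price) == budget for the price.
    roots = np.roots([a, b, c - budget])
    real = roots[np.isreal(roots)].real
    candidates = real[real >= 0]
    # Fall back to the largest sampled price if no valid root exists.
    return float(candidates.min()) if candidates.size else float(prices.max())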

update_carbon_budget(carbon_budget, emissions, year_idx, over=True, under=True)[source]

Adjusts the carbon budget in the far future if emissions are too high or too low. This feature allows simulating overshoot shifting.

Parameters:
  • carbon_budget – Budget for the future year,

  • emissions – Emissions for the future year,

  • year_idx – Index of the year for estimation,

  • over – If True, allows overshoot,

  • under – If True, allows undershoot.

Returns:

An adjusted threshold for the future year

12.2. Sectors and associated functionality

Define a sector, e.g. aggregation of agents.

There are three main kinds of sector classes, encompassing three use cases:

  • Sector: The main workhorse sector of the model. It contains only one kind of data, namely the agents responsible for holding assets and investing in new assets.

  • PresetSector: A sector that is meant to generate demand for the sectors above using a fixed formula or schedule.

  • LegacySector: A wrapper around the original MUSE sectors.

All the sectors derive from AbstractSector. The AbstractSector defines two abstract functions which should be declared by derived sectors. Abstract here means a common programming practice where some concept in the code (e.g. a sector) is given an explicit interface, with the goal of making it easier for other programmers to use and implement the concept.

  • AbstractSector.factory(): Creates a sector from input data

  • AbstractSector.next(): A function which takes a market (demand, supply, prices) and returns a market. What happens within could be anything, though it will likely consist of dispatch and investment.

New sectors can be registered with the MUSE input files using muse.sectors.register.register_sector().

@register_sector(sector_class=None, name=None)[source]

Registers a sector so it is available MUSE-wide.

Example

>>> from muse.sectors import AbstractSector, register_sector
>>> @register_sector(name="MyResidence")
... class ResidentialSector(AbstractSector):
...     pass

12.2.1. AbstractSector

class AbstractSector[source]

Abstract base class for sectors.

Sectors are part of type hierarchy with AbstractSector at the apex: all sectors should derive from AbstractSector directly or indirectly.

MUSE requires only two things of a sector: it should be instantiable via a factory() function, and it should be callable via next().

AbstractSector declares an interface with these two functions. Sectors which derive from it will be warned if either method is not implemented.

abstract classmethod factory(name, settings)[source]

Creates class from settings named-tuple.

abstract next(mca_market)[source]

Advance sector by one time period.
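
A minimal sketch of a concrete sector implementing this interface; the body of next() is illustrative only, as a real sector would dispatch its agents and compute supply and consumption:

import xarray as xr

from muse.sectors import AbstractSector, register_sector

@register_sector(name="Passthrough")
class PassthroughSector(AbstractSector):
    """Sector that offers no supply and leaves the market unchanged."""

    @classmethod
    def factory(cls, name, settings):
        # Ignore the global settings and return a bare instance.
        return cls()

    def next(self, mca_market: xr.Dataset) -> xr.Dataset:
        # Consume and produce nothing: return the market as-is.
        return mca_market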

12.2.2. Sector

class Sector(name, technologies, subsectors=[], timeslices=None, technodata_timeslices=None, interactions=None, interpolation='linear', outputs=None, supply_prod=None)[source]

Base class for all sectors.

property agents

Iterator over all agents in the sector.

property capacity

Aggregates capacity across agents.

The capacities are aggregated leaving only two dimensions: asset (technology, installation date, region), year.

static convert_market_timeslice(market, timeslice, intensive='prices')[source]

Converts a market from one timeslice to another.

classmethod factory(name, settings)[source]

Creates class from settings named-tuple.

property forecast

Maximum forecast horizon across agents.

If no agents with a “forecast” attribute are found, defaults to 5. It cannot be lower than 1 year.

interactions

Interactions between agents.

Called right before computing new investments, this function should manage any interactions between agents, e.g. passing assets from new agents to retro agents, and market make-up from retro to new.

Defaults to doing nothing.

The function takes the sequence of agents as input, and returns nothing. It is expected to modify the agents in-place.

interpolation

Interpolation method and arguments when computing years.

market_variables(market, technologies)[source]

Computes resulting market: production, consumption, and costs.

name

Name of the sector.

next(mca_market, time_period=None, current_year=None)[source]

Advance sector by one time period.

Parameters:
  • mca_market – Market with demand, supply, and prices.

  • time_period – Length of the time period in the framework. Defaults to the range of mca_market.year.

Returns:

A market containing the supply offered by the sector, its attendant consumption of fuels and materials, and the associated costs.

outputs

A function for outputting data for post-mortem analysis.

subsectors

Subsectors controlled by this object.

supply_prod

Computes the production used to return the supply to the MCA.

It can be anything registered with @register_production.

technologies

Parameters describing the sector’s technologies.

timeslices

Timeslices at which this sector operates.

If None, it will operate using the timeslice of the input market.

12.2.3. Subsector

class Subsector(agents, commodities, demand_share=None, constraints=None, investment=None, name='subsector', forecast=5, expand_market_prices=False)[source]

Agent group servicing a subset of the sectorial commodities.

12.2.4. PresetSector

class PresetSector(presets, interpolation_mode='linear', name='preset')[source]

Sector with outcomes fixed from the start.

classmethod factory(name, settings)[source]

Constructs a PresetSector from input data.

interpolation_mode

Interpolation method

name

Name by which to identify a sector

next(mca_market)[source]

Advance sector by one time period.

presets

Market across time and space.

12.2.5. LegacySector

class LegacySector(name, old_sector, timeslices, commodities, commodity_price, static_trade, regions, time_framework, mode, excess, market_iterative, sectors_dir, output_dir)[source]
calibrated

Flag if the sector has gone through the calibration process.

commodities

Commodities for each sector, as well as global commodities.

commodity_price

Initial price of all the commodities.

dims

Order of the input and output dimensions.

excess

Allowed excess of capacity.

classmethod factory(name, settings, **kwargs)[source]

Creates class from settings named-tuple.

property global_commodities

List of all commodities used by the MCA.

inputs(consumption, prices, supply)[source]

Converts xarray to MUSE numpy input arrays.

static load_timeslices_and_aggregation(timeslices, sectors)[source]

Loads all sector timeslices and finds the finest one.

market_iterative

TODO: document this parameter.

mode

If ‘Calibration’, the sector runs in calibration mode

name

Name of the sector

next(market)[source]

Adapter between the old and the new.

old_sector

Legacy sector method to run the calculation

output_dir

Outputs directory.

outputs(consumption, prices, supply)[source]

Converts MUSE numpy outputs to xarray.

regions

Regions taking part in the simulation.

property sector_commodities

List of all commodities used by the Sector.

property sector_timeslices

Timeslices used by this sector.

sectors_dir

Sectors directory.

static_trade

Static trade needed for the conversion and supply sectors.

time_framework

Time framework of the complete simulation.

timeslices

Timeslices for sectors and mca.

12.2.6. Production

Various ways and means to compute production.

Production is the amount of commodities produced by an asset. However, depending on the context, it can be computed in several ways. For instance, it can be obtained straight from the capacity of the asset. Or it can be obtained by matching for the same commodities with a set of assets.

Production methods can be registered via the @register_production decorator. Registering a function makes the function accessible from MUSE’s input file. Production methods are not expected to modify their arguments. Furthermore, they should conform to the following signature:

@register_production
def production(
    market: xr.Dataset, capacity: xr.DataArray, technologies: xr.Dataset, **kwargs
) -> xr.DataArray:
    pass
param market:

Market, including demand and prices.

param capacity:

The capacity of each asset within a market.

param technologies:

A dataset characterising the technologies of the same assets.

param **kwargs:

Any number of keyword arguments

returns:

An xr.DataArray with the amount produced for each good from each asset.

PRODUCTION_SIGNATURE

Production signature.

alias of Callable[[DataArray, DataArray, Dataset], DataArray]
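
A hedged sketch of a registered production method following the signature above; the fixed_outputs technodata variable and the module path are assumptions:

import xarray as xr

from muse.production import register_production  # assumed module path

@register_production
def derated_production(
    market: xr.Dataset,
    capacity: xr.DataArray,
    technologies: xr.Dataset,
    derating: float = 0.9,
    **kwargs,
) -> xr.DataArray:
    # Illustration only: a flat fraction of the installed capacity,
    # spread over each commodity the assets can output (fixed_outputs
    # is assumed to exist in the technodata).
    return derating * capacity * technologies.fixed_outputs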

demand_matched_production(market, capacity, technologies, costs='prices')[source]

Production from matching demand via annual LCOE.

factory(settings='maximum_production', **kwargs)[source]

Creates a production functor.

This function’s raison d’être is to convert the input from a TOML file into an actual functor usable within the model, i.e. it converts data into logic.

Parameters:
  • settings – Registered production method to create. The name is resolved when the function returned by the factory is called. Hence, it could refer to a function yet to be registered when this factory method is called.

  • **kwargs – any keyword argument the production method accepts.

maximum_production(market, capacity, technologies)[source]

Production when running at full capacity.

Full capacity is limited by the utilization factor. For more details, see muse.quantities.maximum_production().

register_production(function=None)[source]

Decorator to register a function as a production method.

See also

muse.production

supply(market, capacity, technologies)[source]

Service current demand equally from all assets.

“Equally” means that equivalent technologies are used to the same percentage of their respective capacity.

12.2.7. Agent Interactions

Modes of interactions between agents.

Interactions between agents are modelled via two orthogonal concepts:

  • a net is a set of agents which interact in some way

  • an interaction proper is a function that takes a net and actually performs the interaction.

Hence, there are two registration decorators in this module, register_interaction_net() and register_agent_interaction(). The first registers functions that take the full set of agents as input and return a sequence of nets; the same interaction is expected to be applied to each net in the sequence. The second registers the interaction proper: it takes agents as arguments and returns nothing. It is expected to modify the agents in-place.
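
A hedged sketch pairing the two registration decorators (the module path muse.interactions is an assumption): a net function groups agents, and an interaction acts on each net in-place.

from muse.interactions import (  # assumed module path
    register_agent_interaction,
    register_interaction_net,
)

@register_interaction_net
def same_region_nets(agents, sector):
    # One net per region: agents sharing a region interact together.
    regions = sorted({agent.region for agent in agents})
    return [[a for a in agents if a.region == region] for region in regions]

@register_agent_interaction
def share_quantity(*agents, sector=None):
    # Interactions modify agents in-place and return nothing; here each
    # agent receives the net's average quantity (illustration only).
    mean = sum(agent.quantity for agent in agents) / len(agents)
    for agent in agents:
        agent.quantity = mean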

factory(inputs=None)[source]

Creates an interaction functor.

new_to_retro_net(agents, first_category='newcapa')[source]

Interactions between new and retrofit agents.

register_agent_interaction(function)[source]

Decorator to register an agent to agent(s) interaction function.

An agent interaction function takes at least two agents and makes them interact in some way.

An agent interaction function also takes as argument a sector object. This object should not be modified in any way. But it can be queried for parameters, if the specific agent interaction function requires it. This is most likely the same configuration object passed on to the interaction net function.

register_interaction_net(function)[source]

Decorator to register a function computing interaction nets.

An interaction net function takes as input the list of all agents and returns the list of all interactions, where an interaction is a list of at least two interacting agents.

An interaction-net function also takes as argument a sector object. This object should not be modified in any way. But it can be queried for parameters, if the specific interaction-net function requires it.

transfer_assets(from_, to_)[source]

Transfer assets from first agent to second agent.

12.3. Agents and associated functionalities

Holds all building agents.

agents_factory(params_or_path, capacity, technologies, regions=None, year=None, **kwargs)[source]

Creates a list of agents for the chosen sector.

create_newcapa_agent(capacity, year, region, share, search_rules='all', interpolation='linear', merge_transform='new', quantity=0.3, housekeeping='noop', retrofit_present=True, **kwargs)[source]

Creates newcapa agent from muse primitives.

If there are no retrofit agents present in the sector, then the newcapa agent needs to be initialised with the initial capacity of the sector.

create_retrofit_agent(technologies, capacity, share, year, region, interpolation='linear', decision='mean', **kwargs)[source]

Creates retrofit agent from muse primitives.

factory(existing_capacity_path=None, agent_parameters_path=None, technodata_path=None, technodata_timeslices_path=None, sector=None, sectors_directory=PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/muse-os/checkouts/latest/docs/data'), baseyear=2010)[source]

Reads list of agents from standard MUSE input files.

class AbstractAgent(name='Agent', region='', assets=None, interpolation='linear', category=None, quantity=1)[source]

Base class for all agents.

assets

Current stock of technologies.

category

Attribute to classify different sets of agents.

filter_input(dataset, year=None, **kwargs)[source]

Filter inputs for usage in agent.

For instance, filters down to agent’s region, etc.

interpolation

Interpolation method.

name

Name associated with the agent

abstract next(technologies, market, demand, time_period=1)[source]

Iterates agent one turn.

The goal is to figure out from market variables which technologies to invest in and by how much.

quantity

Attribute describing the agent’s share of the population.

region

Region the agent operates in

tolerance = 1e-12

Tolerance criterion for floating-point comparisons.

uuid

A unique identifier for the agent.

class Agent(name='Agent', region='USA', assets=None, interpolation='linear', search_rules=None, objectives=None, decision=None, year=2010, maturity_threshhold=0, forecast=5, housekeeping=None, merge_transform=None, demand_threshhold=None, category=None, asset_threshhold=0.0001, quantity=1, **kwargs)[source]

Agent that is capable of computing a search-space and a cost metric.

This agent will not perform any investment itself.

_housekeeping

Transforms applied on the assets at the start of each iteration.

It could mean keeping the assets as they are, or removing assets with no capacity in the current year and beyond, etc. It can be any function registered with register_initial_asset_transform().

add_investments(technologies, investments, current_year, time_period)[source]

Add new assets to the agent.

asset_housekeeping()[source]

Reduces memory footprint of assets.

Performs tasks such as:

  • remove empty assets

  • remove years prior to current

  • interpolate current year and forecasted year

asset_threshhold

Threshold below which assets are not added.

decision

Creates single decision objective from one or more objectives.

demand_threshhold

Threshold below which the demand share is zero.

This criterion avoids fulfilling demand for very small values. If None, the criterion is not applied.

forecast

Number of years to look into the future for forecasting purposes.

property forecast_year

Year to consider when forecasting.

maturity_threshhold

Market share threshold.

Threshold used when filtering replacement technologies with respect to market share.

merge_transform

Transforms applied on the old and new assets.

It could mean using only the new assets, or merging old and new, etc. It can be any function registered with register_final_asset_transform().

next(technologies, market, demand, time_period=1)[source]

Iterates agent one turn.

The goal is to figure out from market variables which technologies to invest in and by how much.

This function will modify self.assets and increment self.year. Other attributes are left unchanged. Arguments to the function are never modified.

objectives

One or more objectives by which to decide next investments.

search_rules

Search rule(s) determining potential replacement technologies.

This is a string referring to a filter, or a sequence of strings referring to multiple filters, applied one after the other. Any function registered via muse.filters.register_filter can be used to filter the search space.

year

Current year.

The year is incremented by one every time next is called.

class InvestingAgent(*args, constraints=None, investment=None, **kwargs)[source]

Agent that performs investment for itself.

constraints

Creates a set of constraints limiting investment.

invest

Method to use when fulfilling demand from the rated set of technologies.

next(technologies, market, demand, time_period=1)[source]

Iterates agent one turn.

The goal is to figure out from market variables which technologies to invest in and by how much.

This function will modify self.assets and increment self.year. Other attributes are left unchanged. Arguments to the function are never modified.

12.3.1. Objectives

Valuation functions for replacement technologies.

Objectives are used to compare replacement technologies. Each should correspond to a single, well-defined economic concept. Multiple objectives can later be combined via decision functions.

Objectives should be registered via the @register_objective decorator. This makes it possible to refer to them by name in agent input files, and nominally to set extra input parameters.

The factory() function creates a function that calls all objectives defined in its input argument and returns a dataset with each objective as a separate data array.

Objectives are not expected to modify their arguments. Furthermore, they should conform to the following signature:

@register_objective
def comfort(
    agent: Agent,
    demand: xr.DataArray,
    search_space: xr.DataArray,
    technologies: xr.Dataset,
    market: xr.Dataset,
    **kwargs
) -> xr.DataArray:
    pass
param agent:

the agent relevant to the search space. The objective may need to query the agent for parameters, e.g. the current year, the interpolation method, the tolerance, etc.

param demand:

Demand to fulfill.

param search_space:

A boolean matrix represented as a xr.DataArray, listing replacement technologies for each asset.

param technologies:

A data set characterising the technologies from which the agent can draw assets.

param market:

Market variables, such as prices or current capacity and retirement profile.

param kwargs:

Extra input parameters. These parameters are expected to be set from the input file.

Warning

The standard agent csv file does not allow setting these parameters.

returns:

A DataArray with at least one dimension corresponding to replacement. Only the technologies in search_space.replacement should be present. Furthermore, if an asset dimension is present, then it should correspond to search_space.asset. Other dimensions can be present, as long as the subsequent decision function knows how to reduce them.
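
A hedged sketch of a custom objective following the signature above; the "maintenance" technodata variable is hypothetical, and the module path is an assumption:

import xarray as xr

from muse.objectives import register_objective  # assumed module path

@register_objective
def maintenance_burden(
    agent, demand, search_space, technologies, market, **kwargs
) -> xr.DataArray:
    # Filter the (hypothetical) maintenance score down to the agent's
    # region and forecast year, then keep the candidate replacements.
    maintenance = agent.filter_input(
        technologies["maintenance"], year=agent.forecast_year
    )
    return maintenance.sel(technology=search_space.replacement)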

capacity_to_service_demand(agent, demand, search_space, technologies, market, *args, **kwargs)[source]

Minimum capacity required to fulfill the demand.

capital_costs(agent, demand, search_space, technologies, *args, **kwargs)[source]

Capital costs for input technologies.

The capital costs are computed as \(a * b^\alpha\), where \(a\) is “cap_par” from the Techno-data, \(b\) is the “scaling_size”, and \(\alpha\) is “cap_exp”. In other words, capital costs are constant across the simulation for each technology.

comfort(agent, demand, search_space, technologies, *args, **kwargs)[source]

Comfort value provided by technologies.

efficiency(agent, demand, search_space, technologies, *args, **kwargs)[source]

Efficiency of the technologies.

emission_cost(agent, demand, search_space, technologies, market, *args, **kwargs)[source]

Emission cost for each technology when fulfilling the whole demand.

Given the demand share \(D\), the emissions per amount produced \(E\), and the prices per emittant \(P\), then emissions costs \(C\) are computed as:

\[C = \sum_s \left(\sum_cD\right)\left(\sum_cEP\right),\]

with \(s\) the timeslices and \(c\) the commodity.

equivalent_annual_cost(agent, demand, search_space, technologies, market, *args, **kwargs)[source]

Equivalent annual costs (or annualized cost) of a technology.

This is the cost that, if it were to occur equally in every year of the project lifetime, would give the same net present cost as the actual cash flow sequence associated with that component. The cost is computed using the annualized cost expression given by HOMER Energy.

Parameters:
  • agent – The agent of interest

  • search_space – The search space for replacement technologies

  • technologies – All the technologies

  • market – The market parameters

Returns:

xr.DataArray with the EAC calculated for the relevant technologies

factory(settings='LCOE')[source]

Creates a function computing multiple objectives.

The input can be a single objective defined by its name alone. Or it can be a single objective defined by a dictionary which must include at least a “name” item, as well as any extra parameters to pass to the objective. Or it can be a sequence of objectives defined by name or by dictionary.
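A hedged sketch of the three accepted forms:

from muse.objectives import factory

compute_objectives = factory("LCOE")                             # single name
compute_objectives = factory({"name": "fixed_costs"})            # dictionary
compute_objectives = factory(["LCOE", {"name": "fixed_costs"}])  # sequence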

fixed_costs(agent, demand, search_space, technologies, market, *args, **kwargs)[source]

Fixed costs associated with a technology.

Given a factor \(\alpha\) and an exponent \(\beta\), the fixed costs \(F\) are computed from the capacity fulfilling the current demand \(C\) as:

\[F = \alpha * C^\beta\]

\(\alpha\) and \(\beta\) are “fix_par” and “fix_exp” in Techno-data, respectively.

fuel_consumption_cost(agent, demand, search_space, technologies, market, *args, **kwargs)[source]

Cost of fuels when fulfilling the whole demand.

lifetime_levelized_cost_of_energy(agent, demand, search_space, technologies, market, *args, **kwargs)[source]

Levelized cost of energy (LCOE) of technologies over their lifetime.

It follows the simplified LCOE given by NREL.

Parameters:
  • agent – The agent of interest

  • search_space – The search space for replacement technologies

  • technologies – All the technologies

  • market – The market parameters

Returns:

xr.DataArray with the LCOE calculated for the relevant technologies

net_present_value(agent, demand, search_space, technologies, market, *args, **kwargs)[source]

Net present value (NPV) of the relevant technologies.

The net present value of a Component is the present value of all the revenues that a Component earns over its lifetime minus all the costs of installing and operating it. Follows the definition of the net present cost given by HOMER Energy (https://www.homerenergy.com/products/pro/docs/3.11/net_present_cost.html). Metrics are calculated as follows:

  • energy commodities INPUTS are related to fuel costs

  • environmental commodities OUTPUTS are related to environmental costs

  • material and service commodities INPUTS are related to consumable costs

  • fixed and variable costs are given as technodata inputs and depend on the installed capacity and production (non-environmental), respectively

  • capacity costs are given as technodata inputs and depend on the installed capacity

Note

Here, the installation year is always agent.forecast_year, since objectives compute the NPV for technologies to be installed in the current year. A more general NPV computation (which would then live in quantities.py) would have to refer to installation year of the technology.

Parameters:
  • agent – The agent of interest

  • search_space – The search space for replacement technologies

  • technologies – All the technologies

  • market – The market parameters

Returns:

xr.DataArray with the NPV calculated for the relevant technologies

register_objective(function)[source]

Decorator to register a function as an objective.

Registers a function as an objective so that it can be applied easily when sorting technologies one against the other.

The input name is expected to be in lower_snake_case, since it ought to be a python function. CamelCase, lowerCamelCase, and kebab-case names are also registered.

12.3.2. Search Rules

Various search-space filters.

Search-space filters return a modified matrix of booleans, with dimension (asset, replacement), where asset refer to technologies currently managed by the agent, and replacement to all technologies the agent could consider, prior to filtering.

Filters should be registered using the decorator register_filter(). The registration makes it possible to call them from the agent by specifying the search_rules attribute. The search_rules attribute is a string or a list of strings specifying the filters to apply, one after the other, when considering the search space.

Filters are not expected to modify any of their arguments. They should all follow the same signature:

@register_filter
def search_space_filter(
    agent: Agent,
    search_space: xr.DataArray,
    technologies: xr.Dataset,
    market: xr.Dataset
) -> xr.DataArray:
    pass
param agent:

the agent relevant to the search space. The filters may need to query the agent for parameters, e.g. the current year, the interpolation method, the tolerance, etc.

param search_space:

the current search space.

param technologies:

A data set characterising the technologies from which the agent can draw assets.

param market:

Market variables, such as prices or current capacity and retirement profile.

returns:

A new search space with the same data-type as the input search-space, but with potentially different values.

In practice, an initial search space is created by calling a function with the signature given below, and registered with register_initializer(). The initializer function returns a search space which is passed on to a chain of filters, as done in the factory() function.

Functions creating initial search spaces should have the following signature:

@register_initializer
def search_space_initializer(
    agent: Agent,
    demand: xr.DataArray,
    technologies: xr.Dataset,
    market: xr.Dataset
) -> xr.DataArray:
    pass
param agent:

the agent relevant to the search space. The filters may need to query the agent for parameters, e.g. the current year, the interpolation method, the tolerance, etc.

param demand:

share of the demand per existing reference technology (e.g. assets).

param technologies:

A data set characterising the technologies from which the agent can draw assets.

param market:

Market variables, such as prices or current capacity and retirement profile.

returns:

An initial search space
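
A hedged sketch of a registered filter following the signature above; the "coal" name is purely illustrative, and the module path is an assumption:

import xarray as xr

from muse.filters import register_filter  # assumed module path

@register_filter
def no_coal(agent, search_space, technologies, market, **kwargs) -> xr.DataArray:
    # Disallow replacement technologies whose name contains "coal",
    # keeping the (asset, replacement) boolean layout of the input.
    return search_space & ~search_space.replacement.str.contains("coal")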

compress(agent, search_space, technologies, market, **kwargs)[source]

Compress search space to include only potential technologies.

This operation reduces the size of the search space along the replacement dimension, such that only technologies considered as a replacement by at least one asset are left. Unlike most filters, it does not change the data, but rather how the data is represented. In other words, this is mostly an optimization for later steps, to avoid unnecessary computations.

currently_existing_tech(agent, search_space, technologies, market)[source]

Only consider technologies that currently exist in the market.

This filter only allows technologies that exist in the market and have non-zero capacity in the current year. See currently_referenced_tech for a similar filter that does not check the capacity.

currently_referenced_tech(agent, search_space, technologies, market)[source]

Only consider technologies that are currently referenced in the market.

This filter will allow any technology that exists in the market, even if it currently sits at zero capacity (unlike currently_existing_tech which requires non-zero capacity in the current year).

factory(settings=None, separator='->')[source]

Creates filters from input TOML data.

The input data is standardized to a list of dictionaries where each dictionary contains at least one member, “name”.

The first dictionary specifies the initial function which creates the search space from the demand share, the market, and the dataset describing technologies in the sectors.

The next entries are applied in turn and transform the search space in some way. In other words the process is more or less:

search_space = initial_filter(
    agent, demand, technologies=technologies, market=market
)
for afilter in filters:
    search_space = afilter(
        agent, search_space, technologies=technologies, market=market
    )
return search_space

initial_filter is simply the first filter given on input, if that filter is registered with register_initializer(). Otherwise, initialize_from_technologies() is automatically inserted.
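
A hedged usage sketch of the list form; the call signature of the returned functor is assumed to follow the pseudo-code above:

from muse.filters import factory

search = factory(["initialize_from_technologies", "same_enduse", "compress"])
search_space = search(agent, demand, technologies=technologies, market=market)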

identity(agent, search_space, *args, **kwargs)[source]

Returns search space as given.

initialize_from_technologies(agent, demand, technologies, *args, **kwargs)[source]

Initialize a search space from existing technologies.

maturity(agent, search_space, technologies, market, enduse_label='service', **kwargs)[source]

Only allows technologies that have achieved a given market share.

Specifically, the market share refers to the capacity for each end-use.

reduce_asset(agent, search_space, technologies, market, **kwargs)[source]

Reduce over assets.

register_filter(function)[source]

Decorator to register a function as a filter.

Registers a function as a filter so that it can be applied easily when constraining the technology search-space.

The name that the function is registered with defaults to the function name. However, it can also be specified explicitly as a keyword argument. In any case, it must be unique amongst all search-space filters.

register_initializer(function)[source]

Decorator to register a function as a search-space initializer.

same_enduse(agent, search_space, technologies, *args, enduse_label='service', **kwargs)[source]

Only allow for technologies with at least the same end-use.

same_fuels(agent, search_space, technologies, *args, **kwargs)[source]

Filters technologies with the same fuel type.

similar_technology(agent, search_space, technologies, *args, **kwargs)[source]

Filters technologies with the same type.

with_asset_technology(agent, search_space, technologies, market, **kwargs)[source]

Search space also contains its asset technology for each asset.

12.3.3. Decision Methods

Decision methods combining several objectives into ones.

Decision methods create a single scalar from multiple objectives. To be available from the input files, functions implementing decision methods should follow a specific signature:

@register_decision
def weighted_sum(objectives: Dataset, parameters: Any, **kwargs) -> DataArray:
    pass
param objectives:

A dataset where each array is a separate objective.

param parameters:

parameters, such as weights, whether to minimize or maximize, the names of objectives to consider, etc.

param kwargs:

Extra input parameters. These parameters are expected to be set from the input file.

Warning

The standard agent csv file does not allow setting these parameters.

returns:

A data array with ranked replacement technologies.
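
A hedged sketch of a decision method following the signature above, assuming the decorator can be applied directly like the other registrators and that the module path is muse.decisions:

import xarray as xr

from muse.decisions import register_decision  # assumed module path

@register_decision
def worst_case(objectives: xr.Dataset, parameters, **kwargs) -> xr.DataArray:
    # Conservative aggregate: the element-wise maximum over all
    # objectives, so that ranking minimizes the worst objective value.
    return objectives.to_array(dim="objective").max("objective")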

epsilon_constraints(objectives, parameters, mask=None)[source]

Minimizes first objective subject to constraints on other objectives.

The parameters are a sequence of tuples (name, minimize, epsilon), where name is the name of the objective, minimize is True if minimizing and False if maximizing that objective, and epsilon is the constraint. The first objective is the one that will be minimized, according to:

Given objectives \(O^{(i)}_t\), with \(i \in [|1, N|]\) and \(t\) the replacement technologies, this function computes the ranking with respect to \(t\):

\[\mathrm{ranking}_{O^{(i)}_t < \epsilon_i} O^{(0)}_t\]

The first tuple can be restricted to (name, minimize), since epsilon is ignored.

The result is the matrix \(O^{(0)}\), modified such that minimizing over the replacement dimension takes into account the constraints and the optimization direction (minimize or maximize). In other words, calling result.rank('replacement') will yield the expected result.

factory(settings='mean')[source]

Creates a decision method based on the input settings.

lexical_comparison(objectives, parameters)[source]

Lexical comparison over the objectives.

Lexical comparison operates by binning the objectives into bins of width \(w_i = \min_j(p_i o_i^j)\). Once binned, dimensions other than asset and technology are reduced by taking the max, e.g. the largest constraint. Finally, the objectives are ranked lexicographically, in the order given by the parameters.

The result is an array of tuples which can subsequently be compared lexicographically.

mean(objectives, *args, **kwargs)[source]

Mean over objectives.

register_decision(function, name)[source]

Decorator to register a function as a decision.

Registers a function as a decision so that it can be applied easily when aggregating different objectives together.

retro_epsilon_constraints(objectives, parameters)[source]

Epsilon constraints where the current tech is included.

Modifies the parameters to the function such that the existing technologies are always competitive.

retro_lexical_comparison(objectives, parameters)[source]

Lexical comparison over the objectives.

Lexical comparison operates by binning the objectives into bins of width \(w_i = p_i o_i\), where \(i\) are the current assets. Once binned, dimensions other than asset and replacement are reduced by taking the max, e.g. the largest constraint. Finally, the objectives are ranked lexicographically, in the order given by the parameters.

The result is an array of tuples which can subsequently be compared lexicographically.

single_objective(objectives, parameters)[source]

Single objective decision method.

It only decides on minimization vs maximization and multiplies by a given factor. The input parameters can take the following forms:

  • Standard sequence [(objective, direction, factor)], in which case it must have only one element.

  • A single string: defaults to standard sequence [(string, 1, 1)]

  • A tuple (string, bool): defaults to standard sequence [(string, direction, 1)]

  • A tuple (string, bool, factor): defaults to standard sequence [(string, direction, factor)]

weighted_sum(objectives, parameters)[source]

Weighted sum over normalized objectives.

The objectives are each normalized to [0, 1] over the replacement dimension. Furthermore, the dimensions other than asset and replacement are reduced by taking the mean.

More specifically, the objective function is:

\[\sum_m c_m \frac{A_m - \min(A_m)}{\max(A_m) - \min(A_m)}\]

where the sum runs over the different objectives, \(c_m\) is a scalar coefficient, and \(A_m\) is a matrix with dimensions (existing tech, replacement tech). \(\max(A)\) and \(\min(A)\) return the largest and smallest components of the input matrix. If \(c_m\) is positive, then that particular objective is minimized, whereas if it is negative, that particular objective is maximized.

12.3.4. Investment Methods

Investment decision.

An investment determines which technologies to invest in, given a metric to determine preferred technologies, a corresponding search space of technologies, and the demand to fulfill.

Investments should be registered via the decorator register_investment. The registration makes it possible to call investments dynamically through compute_investment, by specifying the name of the investment. It is part of MUSE’s plugin platform.

Investments are not expected to modify any of their arguments. They should all have the following signature:

@register_investment
def investment(
    costs: xr.DataArray,
    search_space: xr.DataArray,
    technologies: xr.Dataset,
    constraints: List[Constraint],
    year: int,
    **kwargs
) -> xr.DataArray:
    pass
param costs:

specifies for each asset which replacement technology should be invested in preferentially. This should be an integer or floating point array with dimensions asset and replacement.

param search_space:

an asset by replacement matrix defining allowed and disallowed replacement technologies for each asset

param technologies:

a dataset containing all constant data characterizing the technologies.

param constraints:

a list of constraints as defined in constraints.

param year:

the current year.

returns:

A data array with dimensions asset and technology specifying the amount of newly invested capacity.
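
A hedged sketch following the signature above; it ignores the constraints and merely flags each asset's cheapest allowed replacement, where a real method would size investments against the constraints (e.g. via an LP). The module path is an assumption:

import xarray as xr

from muse.investments import register_investment  # assumed module path

@register_investment
def cheapest_replacement(
    costs, search_space, technologies, constraints, year, **kwargs
) -> xr.DataArray:
    # Mask disallowed replacements, then mark the minimum-cost one per
    # asset (a 0/1 indicator standing in for invested capacity).
    allowed = costs.where(search_space, float("inf"))
    return (allowed == allowed.min("replacement")).astype(float)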

INVESTMENT_SIGNATURE

Investment signature.

alias of Callable[[DataArray, DataArray, Dataset, List[Dataset], Any], DataArray | Dataset]

cliff_retirement_profile(technical_life, current_year=0, protected=0, interpolation='linear', **kwargs)[source]

Cliff-like retirement profile from current year.

Computes the retirement profile of all technologies in technical_life. Assets with a technical life smaller than the input time-period should automatically be renewed.

Hence, if technical_life <= protected, then effectively the technical life is rewritten as technical_life * n, with n = int(protected // technical_life) + 1 (e.g. a technical life of 3 years with protected = 5 becomes an effective life of 6 years).

We could just return an array where each year is represented. Instead, to save memory, we return a compact view of the same array, in which years where no change happens are removed.

Parameters:
  • technical_life – lifetimes for each technology

  • current_year – current year

  • protected – The technologies are assumed to be renewed between years current_year and current_year + protected

  • **kwargs – arguments by which to filter technical_life, if any.

Returns:

A boolean DataArray where each element along the year dimension is true if the technology is not yet retired in that year.

register_investment(function)[source]

Decorator to register a function as an investment.

The output of the function can be a DataArray, with the invested capacity, or a Dataset. In this case, it must contain a DataArray named “capacity” and, optionally, a DataArray named “production”. Only the invested capacity DataArray is returned to the calling function.

12.3.5. Demand Share

Demand share computations.

The demand share splits a demand amongst agents. It is used within a sector to assign part of the input MCA demand to each agent.

Demand shares functions should be registered via the decorator register_demand_share.

Demand share functions are not expected to modify any of their arguments. They should all have the following signature:

@register_demand_share
def demand_share(
    agents: Sequence[AbstractAgent],
    market: xr.Dataset,
    technologies: xr.Dataset,
    **kwargs
) -> xr.DataArray:
    pass
param agents:

a sequence of agents relevant to the demand share procedure. The agents can be queried for parameters specific to the demand share procedure. For instance, new_and_retro() will query the agents for the assets they own, the region they belong to, their category (new or retrofit), etc.

param market:

Market variables, including prices, consumption and supply.

param technologies:

a dataset containing all constant data characterizing the technologies.

param kwargs:

Any number of keyword arguments that can parametrize how the demand is shared. These keyword arguments can be modified from the TOML file.

returns:

The unmet consumption. Unless indicated, all agents will compete for the full demand. However, if there exists a coordinate “agent” of dimension “asset” giving the uuid of the agent, then agents will only service that part of the demand.

DEMAND_SHARE_SIGNATURE

Demand share signature.

alias of Callable[[Sequence[AbstractAgent], Dataset, Dataset, Any], DataArray]
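
A hedged sketch following DEMAND_SHARE_SIGNATURE; since no “agent” coordinate is added, every agent would compete for the full demand, and the module path is an assumption:

from muse.demand_share import register_demand_share  # assumed module path

@register_demand_share
def full_forecast_demand(agents, market, technologies, forecast=5, **kwargs):
    # Serve the consumption of the last year in the market window.
    return market.consumption.sel(year=market.year.max())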

new_and_retro(agents, market, technologies, production='maximum_production', current_year=None, forecast=5)[source]

Splits demand across new and retro agents.

The input demand is split amongst both new and retro agents. New agents get a share of the increase in demand for the forecast year, whereas retrofit agents are assigned a share of the demand that arises from decommissioned assets.

Parameters:
  • agents – a list of all agents. This list should mainly be used to determine the type of an agent and the assets it owns. The agents will not be modified in any way.

  • market – the market for which to satisfy the demand. It should contain at least consumption and supply. It may contain prices, if that is of use to the production method. The consumption reflects the demand for the commodities produced by the current sector.

  • technologies – quantities describing the technologies.

Pseudo-code:

  1. the capacity is aggregated over agents and expanded over timeslices (an extensive quantity). Generally:

    \[A_{a, s}^r = w_s\sum_i A_a^{r, i}\]

    with \(w_s\) a weight associated with each timeslice and determined via muse.timeslices.convert_timeslice().

  2. An intermediate quantity, the unmet demand \(U\) is defined from \(P[\mathcal{M}, \mathcal{A}]\), a function giving the production for a given market \(\mathcal{M}\), the associated consumption \(\mathcal{C}\), and aggregate assets \(\mathcal{A}\):

    \[U[\mathcal{M}, \mathcal{A}] = \max(\mathcal{C} - P[\mathcal{M}, \mathcal{A}], 0)\]

    where \(\max\) operates element-wise, and indices have been dropped for simplicity. The resulting expression has the same indices as the consumption \(\mathcal{C}_{c, s}^r\).

    \(P\) is any function registered with @register_production.

  3. the new demand \(N\) is defined as:

    \[N = \min\left( \mathcal{C}_{c, s}^r(y + \Delta y) - \mathcal{C}_{c, s}^r(y), U[\mathcal{M}^r(y + \Delta y), \mathcal{A}_{a, s}^r(y)] \right)\]
  4. the retrofit demand \(R\) is defined from the identity

    \[C_{c, s}^r(y + \Delta y) = P[\mathcal{M}^r(y+\Delta y), \mathcal{A}_{a, s}^r(y + \Delta y)] + N_{c, s}^r + R_{c, s}^r\]

    In other words, it is the share of the forecasted consumption that is serviced neither by the current assets still present in the forecast year, nor by the new agent.

  5. then each new agent gets a share of \(N\) proportional to its share of the production, \(P[\mathcal{A}_{a, s}^{r, i}(y)]\). Then the share of the demand for new agent \(i\) is:

    \[N_{c, s, t}^{i, r}(y) = N_{c, s}^r \frac{\sum_\iota P[\mathcal{A}_{s, t, \iota}^{r, i}(y)]} {\sum_{i, t, \iota}P[\mathcal{A}_{s, t, \iota}^{r, i}(y)]}\]
  6. similarly, each retrofit agent gets a share of \(R\) proportional to its share of the decommissioning demand, \(D^{r, i}_{t, c}\). Then the share of the demand for retrofit agent \(i\) is:

    \[R_{c, s, t}^{i, r}(y) = R_{c, s}^r \frac{\sum_\iota\mathcal{D}_{t, c, \iota}^{i, r}(y)} {\sum_{i, t, \iota}\mathcal{D}_{t, c, \iota}^{i, r}(y)}\]

Note that in the last two steps, the assets owned by the agent are aggregated over the installation year. The effect is that the demand serviced by agents is disaggregated over each technology, rather than over each model of each technology (asset).

See also

indices, quantities, Agent investments, decommissioning_demand(), maximum_production()

register_demand_share(function)[source]

Decorator to register a function as a demand share calculation.

unmet_demand(market, capacity, technologies, production='maximum_production')[source]

Share of the demand that cannot be serviced by the existing assets.

\[U[\mathcal{M}, \mathcal{A}] = \max(\mathcal{C} - P[\mathcal{M}, \mathcal{A}], 0)\]

\(\max\) operates element-wise, and indices have been dropped for simplicity. The resulting expression has the same indices as the consumption \(\mathcal{C}_{c, s}^r\).

\(P\) is any function registered with @register_production.

unmet_forecasted_demand(agents, market, technologies, current_year=None, production='maximum_production', forecast=5)[source]

Forecast demand that cannot be serviced by non-decommissioned current assets.

12.3.6. Constraints

Investment constraints.

Constraints on investments ensure that investments match some given criteria. For instance, the constraints could ensure that only so much of a new asset can be built every year.

Functions to compute constraints should be registered via the decorator register_constraints(). This registration step makes it possible for constraints to be declared in the TOML file.

Generally, LP solvers accept linear constraints defined as:

\[A x \leq b\]

with \(A\) a matrix, \(x\) the decision variables, and \(b\) a vector. However, these quantities are dimensionless. They do not have timeslices, assets, replacement technologies, or any other dimensions that users have set up in their model. The crux is to translate from MUSE’s data-structures to a consistent dimensionless format.

In MUSE, users can register constraints functions that return fully dimensional quantities. The matrix operator is split over the capacity decision variables and the production decision variables:

\[A_c .* x_c + A_p .* x_p \leq b\]

The operator \(.*\) means the standard elementwise multiplication of xarray, including automatic broadcasting (adding missing dimensions by repeating the smaller matrix along the missing dimension). Constraint functions return the three quantities \(A_c\), \(A_p\), and \(b\). These three quantities will often not have the same dimension. E.g. one might include timeslices where another might not. The transformation from \(A_c\), \(A_p\), \(b\) to \(A\) and \(b\) happens as described below.

  • \(b\) remains the same. It defines the rows of \(A\).

  • \(x_c\) and \(x_p\) are concatenated one on top of the other and define the columns of \(A\).

  • \(A\) is split into a left submatrix for capacities and a right submatrix for production, following the concatenation of \(x_c\) and \(x_p\)

  • Any dimension in \(A_c .* x_c\) (\(A_p .* x_p\)) that is also in \(b\) defines diagonal entries into the left (right) submatrix of \(A\).

  • Any dimension in \(A_c .* x_c\) (\(A_p .* x_p\)) and missing from \(b\) is reduced by summation over a row in the left (right) submatrix of \(A\). In other words, those dimensions become part of a standard tensor reduction or matrix multiplication.

There are two additional rules. However, they are likely to be the result of an inefficient definition of \(A_c\), \(A_p\) and \(b\).

  • Any dimension in \(A_c\) (\(A_p\)) that is neither in \(b\) nor in \(x_c\) (\(x_p\)) is reduced by summation before consideration for the elementwise multiplication. For instance, if \(d\) is such a dimension, present only in \(A_c\), then the problem becomes \((\sum_d A_c) .* x_c + A_p .* x_p \leq b\).

  • Any dimension missing from \(A_c .* x_c\) (\(A_p .* x_p\)) and present in \(b\) is added by repeating the resulting row in \(A\).
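
As a small, self-contained illustration of the \(.*\) operator with broadcasting (the dimension names are made up; this is plain xarray, not MUSE data):

import numpy as np
import xarray as xr

# A_c carries only an "asset" dimension; x_c carries "asset" and "timeslice".
A_c = xr.DataArray([1.0, 2.0], dims="asset")
x_c = xr.DataArray(np.ones((2, 3)), dims=("asset", "timeslice"))

# xarray broadcasts A_c along the missing "timeslice" dimension:
product = A_c * x_c
assert product.dims == ("asset", "timeslice")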

Constraints are registered using the decorator register_constraints(). The decorated functions must follow this signature:

from typing import Optional

import xarray as xr

from muse.constraints import Constraint, register_constraints


@register_constraints
def constraints(
    demand: xr.DataArray,
    assets: xr.Dataset,
    search_space: xr.DataArray,
    market: xr.Dataset,
    technologies: xr.Dataset,
    year: Optional[int] = None,
    **kwargs,
) -> Constraint:
    pass
demand:

The demand for the sector’s products. In practice, it is a demand share obtained in demand_share. It is a data-array with dimensions including asset, commodity, timeslice.

assets:

The capacity of the assets owned by the agent.

search_space:

A matrix asset vs replacement technology defining which replacement technologies will be considered for each existing asset.

market:

The market as obtained from the MCA.

technologies:

Technodata characterizing the competing technologies.

year:

The current year.

**kwargs:

Any other parameter.

class ScipyAdapter(c, to_muse, bounds=(0, inf), A_ub=None, b_ub=None, A_eq=None, b_eq=None)[source]

Creates the input for the scipy solvers.

Example

Let’s give a first simple example. The constraint max_capacity_expansion() limits how much each capacity can be expanded in a given year.

>>> import numpy as np
>>> from muse import examples
>>> from muse.quantities import maximum_production
>>> from muse.timeslices import convert_timeslice
>>> from muse import constraints as cs
>>> res = examples.sector("residential", model="medium")
>>> market = examples.residential_market("medium")
>>> search = examples.search_space("residential", model="medium")
>>> assets = next(a.assets for a in res.agents if a.category == "retrofit")
>>> market_demand = 0.8 * maximum_production(
...     res.technologies.interp(year=2025),
...     convert_timeslice(
...         assets.capacity.sel(year=2025).groupby("technology").sum("asset"),
...         market.timeslice,
...     ),
... ).rename(technology="asset")
>>> costs = search * np.arange(np.prod(search.shape)).reshape(search.shape)
>>> constraint = cs.max_capacity_expansion(
...     market_demand, assets, search, market, res.technologies,
... )

The constraint acts over capacity decision variables only:

>>> assert constraint.production.data == np.array(0)
>>> assert len(constraint.production.dims) == 0

It is an upper bound for a straightforward sum over the capacities for a given technology. The matrix operator is simply the identity:

>>> assert constraint.capacity.data == np.array(1)
>>> assert len(constraint.capacity.dims) == 0

And the upper bound is expanded over the replacement technologies, but not over the assets. Hence the assets will be summed over in the final constraint:

>>> assert (constraint.b.data == np.array([50.0, 3.0, 3.0, 50.0 ])).all()
>>> assert set(constraint.b.dims) == {"replacement"}
>>> assert constraint.kind == cs.ConstraintKind.UPPER_BOUND

As shown above, it does not bind the production decision variables. Hence, production is zero. The matrix operator for the capacity is simply the identity, so it can be input as the dimensionless scalar 1. The upper bound is simply the maximum for each replacement technology (and region, if that particular dimension exists in the problem).

The lp problem then becomes:

>>> technologies = res.technologies.interp(year=market.year.min() + 5)
>>> inputs = cs.ScipyAdapter.factory(
...     technologies, costs, market.timeslice, constraint
... )

The decision variables are always constrained between zero and infinity:

>>> assert inputs.bounds == (0, np.inf)

The problem is an upper-bound one. There are no equality constraints:

>>> assert inputs.A_eq is None
>>> assert inputs.b_eq is None

The upper bound matrix and vector, and the costs are consistent in their dimensions:

>>> assert inputs.c.ndim == 1
>>> assert inputs.b_ub.ndim == 1
>>> assert inputs.A_ub.ndim == 2
>>> assert inputs.b_ub.size == inputs.A_ub.shape[0]
>>> assert inputs.c.size == inputs.A_ub.shape[1]
>>> assert inputs.c.ndim == 1

In practice, lp_costs() helps us define the decision variables (and c). We can verify that the sizes are consistent:

>>> lpcosts = cs.lp_costs(technologies, costs, market.timeslice)
>>> capsize = lpcosts.capacity.size
>>> prodsize = lpcosts.production.size
>>> assert inputs.c.size == capsize + prodsize

The upper bound itself is over each replacement technology:

>>> assert inputs.b_ub.size == lpcosts.replacement.size

The production decision variables are not involved:

>>> from pytest import approx
>>> assert inputs.A_ub[:, capsize:] == approx(0)

The matrix for the capacity decision variables is a sum over assets for a given replacement technology. Hence, each row is constituted of zeros and ones and sums to the number of assets:

>>> assert inputs.A_ub[:, :capsize].sum(axis=1) == approx(lpcosts.asset.size)
>>> assert set(inputs.A_ub[:, :capsize].flatten()) == {0.0, 1.0}
demand(demand, assets, search_space, market, technologies, year=None, forecast=5, interpolation='linear')[source]

Constrains production to meet demand.

factory(settings=None)[source]

Creates a list of constraints from standard settings.

The standard settings can be a string naming the constraint, a dictionary including at least “name”, or a list of strings and dictionaries.
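
For instance, a hedged sketch (assuming the names resolve to the max_production() and demand() constraints documented in this section, with forecast forwarded to the latter):

from muse import constraints as cs

# A lone string and a dictionary with a "name" key can be mixed in one list:
constraint_functions = cs.factory(["max_production", {"name": "demand", "forecast": 5}])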

lp_constraint(constraint, lpcosts)[source]

Transforms the constraint to LP data.

The goal is to create from lpcosts.capacity, constraint.capacity, and constraint.b a 2d-matrix constraint vs decision variables.

  1. The dimensions of constraint.b are the constraint dimensions. They are renamed "c(xxx)".

  2. The dimensions of lpcosts are the decision-variable dimensions. They are renamed "d(xxx)".

  3. set(b.dims).intersection(lpcosts.xxx.dims) are diagonal in constraint dimensions and decision-variable dimensions, with xxx the capacity or the production.

  4. set(constraint.xxx.dims) - set(lpcosts.xxx.dims) - set(b.dims) are reduced by summation, with xxx the capacity or the production.

  5. set(lpcosts.xxx.dims) - set(constraint.xxx.dims) - set(b.dims) are added for expansion, with xxx the capacity or the production.

See muse.constraints.lp_constraint_matrix() for a more detailed explanation of the transformations applied here.

lp_constraint_matrix(b, constraint, lpcosts)[source]

Transforms one constraint block into an lp matrix.

The goal is to create from lpcosts, constraint, and b a 2d-matrix of constraints vs decision variables.

  1. The dimensions of b are the constraint dimensions. They are renamed "c(xxx)".

  2. The dimensions of lpcosts are the decision-variable dimensions. They are renamed "d(xxx)".

  3. set(b.dims).intersection(lpcosts.dims) are diagonal in constraint dimensions and decision-variable dimensions.

  4. set(constraint.dims) - set(lpcosts.dims) - set(b.dims) are reduced by summation.

  5. set(lpcosts.dims) - set(constraint.dims) - set(b.dims) are added for expansion.

  6. set(b.dims) - set(constraint.dims) - set(lpcosts.dims) are added for expansion. Such dimensions only make sense if they consist of one point.

The result is the constraint matrix, expanded, reduced and diagonalized for the conditions above.

Example:

Let’s first set up a constraint and a cost matrix:

>>> import numpy as np
>>> import xarray as xr
>>> from muse import examples
>>> from muse import constraints as cs
>>> res = examples.sector("residential", model="medium")
>>> technologies = res.technologies
>>> market = examples.residential_market("medium")
>>> search = examples.search_space("residential", model="medium")
>>> assets = next(a.assets for a in res.agents if a.category == "retrofit")
>>> demand = None # not used in max production
>>> constraint = cs.max_production(demand, assets, search, market,
...                                technologies) # noqa: E501
>>> lpcosts = cs.lp_costs(
...     (
...         technologies
...         .interp(year=market.year.min() + 5)
...         .drop_vars("year")
...         .sel(region=assets.region)
...     ),
...     costs=search * np.arange(np.prod(search.shape)).reshape(search.shape),
...     timeslices=market.timeslice,
... )

For a simple example, we can first check the case where b is scalar. The result ought to be a single row of a matrix, or a vector with only decision variables:

>>> from pytest import approx
>>> result = cs.lp_constraint_matrix(
...     xr.DataArray(1), constraint.capacity, lpcosts.capacity
... )
>>> assert result.values == approx(-1)
>>> assert set(result.dims) == {f"d({x})" for x in lpcosts.capacity.dims}
>>> result = cs.lp_constraint_matrix(
...     xr.DataArray(1), constraint.production, lpcosts.production
... )
>>> assert set(result.dims) == {f"d({x})" for x in lpcosts.production.dims}
>>> assert result.values == approx(1)

As expected, the capacity vector is -1, whereas the production vector is 1. These are the values max_production() is set up to create.

Now, let’s check the case where b is the one from the max_production() constraint. In that case, all the dimensions should end up as constraint dimensions: the production for each timeslice, region, asset, and replacement technology should not outstrip the capacity assigned for the asset and replacement technology.

>>> result = cs.lp_constraint_matrix(
...     constraint.b, constraint.capacity, lpcosts.capacity
... )
>>> decision_dims = {f"d({x})" for x in lpcosts.capacity.dims}
>>> constraint_dims = {
...     f"c({x})"
...     for x in set(lpcosts.production.dims).union(constraint.b.dims)
... }
>>> assert set(result.dims) == decision_dims.union(constraint_dims)

The max_production() constraint on the production side is the identity matrix. We can easily check this by stacking the decision and constraint dimensions in the result:

>>> result = cs.lp_constraint_matrix(
...     constraint.b, constraint.production, lpcosts.production
... )
>>> decision_dims = {f"d({x})" for x in lpcosts.production.dims}
>>> assert set(result.dims) == decision_dims.union(constraint_dims)
>>> stacked = result.stack(d=sorted(decision_dims), c=sorted(constraint_dims))
>>> assert stacked.shape[0] == stacked.shape[1]
>>> assert stacked.values == approx(np.eye(stacked.shape[0]))
lp_costs(technologies, costs, timeslices)[source]

Creates costs for solving with scipy’s LP solver.

Example

We can now construct example inputs to the function from the sample model. The costs will be a matrix where each asset has a candidate replacement technology.

>>> import numpy as np
>>> from muse import examples
>>> technologies = examples.technodata("residential", model="medium")
>>> search_space = examples.search_space("residential", model="medium")
>>> timeslices = examples.sector("residential", model="medium").timeslices
>>> costs = (
...     search_space
...     * np.arange(np.prod(search_space.shape)).reshape(search_space.shape)
... )

The function returns the LP vector split along capacity and production variables.

>>> from muse.constraints import lp_costs
>>> lpcosts = lp_costs(
...     technologies.sel(year=2020, region="R1"), costs, timeslices
... )
>>> assert "capacity" in lpcosts.data_vars
>>> assert "production" in lpcosts.data_vars

The capacity costs correspond exactly to the input costs:

>>> assert (costs == lpcosts.capacity).all()

The production is zero in this context. It does not enter the cost function of the LP problem:

>>> assert (lpcosts.production == 0).all()

They should correspond to a data-array with dimensions (asset, replacement) (and possibly region as well).

>>> lpcosts.capacity.dims
('asset', 'replacement')

The production costs are zero by default. However, the production expands over not only the dimensions of the capacity, but also the timeslice during which production occurs and the commodity produced.

>>> lpcosts.production.dims
('timeslice', 'asset', 'replacement', 'commodity')
max_capacity_expansion(demand, assets, search_space, market, technologies, year=None, forecast=None, interpolation='linear')[source]

Max-capacity addition, max-capacity growth, and capacity limits constraints.

Limits by how much the capacity of each technology owned by an agent can grow in a given year. This is a constraint on the agent’s ability to invest in a technology.

Let \(L_t^r(y)\) be the total capacity limit for a given year, technology, and region. \(G_t^r(y)\) is the maximum growth. And \(W_t^r(y)\) is the maximum additional capacity. \(y=y_0\) is the current year and \(y=y_1\) is the year marking the end of the investment period.

Let \(\mathcal{A}^{i, r}_{t, \iota}(y)\) be the current assets, before investment, and let \(\Delta\mathcal{A}^{i,r}_t\) be the future investments. Then the constraints on agent \(i\) are given as:

\[ \begin{align}\begin{aligned}L_t^r(y_0) - \sum_\iota \mathcal{A}^{i, r}_{t, \iota}(y_1) \geq \Delta\mathcal{A}^{i,r}_t\\(y_1 - y_0 + 1) G_t^r(y_0) \sum_\iota \mathcal{A}^{i, r}_{t, \iota}(y_0) - \sum_\iota \mathcal{A}^{i, r}_{t, \iota}(y_1) \geq \Delta\mathcal{A}^{i,r}_t\\(y_1 - y_0)W_t^r(y_0) \geq \Delta\mathcal{A}^{i,r}_t\end{aligned}\end{align} \]

The three constraints are combined into a single one, which is returned as the maximum capacity expansion, \(\Gamma_t^{r, i}\). The maximum capacity expansion cannot impose negative investments:

\[\Gamma_t^{r, i} \geq 0\]
max_production(demand, assets, search_space, market, technologies, year=None)[source]

Constructs constraint between capacity and maximum production.

Constrains the production decision variable by the maximum production for a given capacity.

register_constraints(function)[source]

Registers a constraint with MUSE.

See muse.constraints.

search_space(demand, assets, search_space, market, technologies, year=None, forecast=5)[source]

Removes disabled technologies.

12.3.7. Initial and Final Asset Transforms

Pre and post hooks on agents.

asset_merge_factory(settings='new')[source]

Returns a function for merging new investments into assets.

Available merging functions should be registered with @register_final_asset_transform.

clean(agent, assets)[source]

Removes empty assets.

housekeeping_factory(settings='noop')[source]

Returns a function for performing initial housekeeping.

For instance, remove technologies with no capacity now or in the future. Available housekeeping functions should be registered with @register_initial_asset_transform.

merge_assets(old_assets, new_assets)[source]

Adds new assets to old along asset dimension.

New assets are assumed to be nonequivalent to any old_assets. Indeed, it is expected that the asset dimension does not have coordinates (i.e. it is a combination of coordinates, such as technology and installation year).

After merging the new assets, quantities are back-filled along the year dimension. Further missing values (i.e. future years the old_assets did not take into account) are set to zero.

new_assets_only(old_assets, new_assets)[source]

Returns newly invested assets and ignores old assets.

noop(agent, assets)[source]

Return assets as they are.

old_assets_only(old_assets, new_assets)[source]

Returns old assets and ignores newly invested assets.

register_final_asset_transform(function)[source]

Decorator to register a function to merge new investments into current assets.

The transform is applied at the very end of the agent iteration. It can be any function which takes as input the current set of assets, the new assets, and any number of keyword arguments. The function must return a “merge” of the two assets.

For instance, the new assets could completely replace the old assets (new_assets_only()), or they could be summed to the old assets (merge_assets()).
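
A minimal sketch of a custom final transform (the module path muse.hooks and the policy below are assumptions for illustration):

import xarray as xr

from muse.hooks import register_final_asset_transform  # module path assumed


@register_final_asset_transform
def prefer_new(old_assets: xr.Dataset, new_assets: xr.Dataset) -> xr.Dataset:
    # Illustrative policy: keep the old assets only if nothing was invested.
    return old_assets if new_assets.sizes.get("asset", 0) == 0 else new_assets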

register_initial_asset_transform(function)[source]

Decorator to register a function for cleaning or transforming assets.

The transformation is applied at the start of each iteration. It can be any function which takes an agent and assets as input, along with any number of keyword arguments, and returns the transformed assets. The agent should not be modified. It is only there to query the current year, the region, etc.

12.4. Reading the inputs

Ensemble of functions to read MUSE data.

read_settings(settings_file, path=None)[source]

Loads the input settings for any MUSE simulation.

Loads a MUSE settings file. This must be a TOML formatted file. Missing settings are loaded from the DEFAULT_SETTINGS. Custom Python modules, if present, are loaded, and checks are run to validate the settings and ensure that they are compatible with a MUSE simulation.

Parameters:
  • settings_file – A string or a Path to the settings file

Returns:

A dictionary with the settings


read_attribute_table(path)[source]

Read a standard MUSE csv file for price projections.

read_csv_agent_parameters(filename)[source]

Reads standard MUSE agent-declaration csv-files.

Returns a list of dictionaries, where each dictionary can be used to instantiate an agent in muse.agents.factories.factory().

read_csv_outputs(paths, columns='commodity', indices=('RegionName', 'ProcessName', 'Timeslice'), drop=('Unnamed: 0',))[source]

Read standard MUSE output files for consumption or supply.

read_csv_timeslices(path, **kwargs)[source]

Reads timeslice information from input.

read_global_commodities(path)[source]

Reads commodities information from input.

read_initial_assets(filename)[source]

Reads and formats data about initial capacity into a dataframe.

read_initial_market(projections, base_year_import=None, base_year_export=None, timeslices=None)[source]

Read projections, import and export csv files.

read_io_technodata(filename)[source]

Reads process inputs or outputs.

There are four axes: (technology, region, year, commodity)

read_macro_drivers(path)[source]

Reads a standard MUSE csv file for macro drivers.

read_regression_parameters(path)[source]

Reads the regression parameters from a standard MUSE csv file.

read_technodictionary(filename)[source]

Reads and formats technodata into a dataset.

There are three axes: technologies, regions, and year.

read_technologies(technodata_path_or_sector=None, technodata_timeslices_path=None, comm_out_path=None, comm_in_path=None, commodities=None, sectors_directory=PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/muse-os/checkouts/latest/docs/data'))[source]

Reads data characterising technologies from files.

Parameters:
  • technodata_path_or_sector – If comm_out_path and comm_in_path are not given, then this argument refers to the name of the sector. The three paths are then determined using standard locations and names. Specifically, the technodata is looked up as a “technodataSECTORNAME.csv” file in the standard location for that sector. However, if comm_out_path and comm_in_path are given, then this should be the path to the technodata file.

  • technodata_timeslices_path – This argument refers to the TechnodataTimeslices file which specifies the utilization factor per timeslice for the specified technology.

  • comm_out_path – If given, then refers to the path of the file specifying output commodities. If not given, then defaults to “commOUTtechnodataSECTORNAME.csv” in the relevant sector directory.

  • comm_in_path – If given, then refers to the path of the file specifying input commodities. If not given, then defaults to “commINtechnodataSECTORNAME.csv” in the relevant sector directory.

  • commodities – Optional. If commodities is given, it should point to a global commodities file, or a dataset akin to reading such a file with read_global_commodities. In either case, the information pertaining to commodities will be added to the technologies dataset.

  • sectors_directory – Optional. If technodata_path_or_sector is a string indicating the name of the sector, then this is a path to a directory where standard input files are contained.

Returns:

A dataset with all the characteristics of the technologies.

read_timeslice_shares(path=PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/muse-os/checkouts/latest/docs/data'), sector=None, timeslice='Timeslices{sector}.csv')[source]

Reads sliceshare information into a xr.Dataset.

Additionally, this function will try to recover the timeslice multi-index from an input file “Timeslices{sector}.csv” in the same directory as the timeslice shares. Pass None if this behaviour is not required.

SETTINGS_CHECKS = {'check_budget_parameters': <function check_budget_parameters>, 'check_foresight': <function check_foresight>, 'check_global_data_files': <function check_global_data_files>, 'check_interpolation_mode': <function check_interpolation_mode>, 'check_iteration_control': <function check_iteration_control>, 'check_log_level': <function check_log_level>, 'check_sectors_files': <function check_sectors_files>, 'check_time_slices': <function check_time_slices>}

Dictionary of settings checks.

SETTINGS_CHECKS_SIGNATURE

Settings checks signature.

alias of Callable[[dict], None]

register_settings_check(function)[source]

Decorator to register a function as a settings check.

Registers a function as a settings check so that it can be applied easily when validating the MUSE input settings.

There is no restriction on the function name, although it should be in lower_snake_case, as is conventional for Python functions.
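
A minimal sketch of a custom check following the Callable[[dict], None] signature above (the module path muse.readers.toml and the check body are assumptions for illustration):

from muse.readers.toml import register_settings_check  # module path assumed


@register_settings_check
def check_time_framework_sorted(settings: dict) -> None:
    # Illustrative check: simulation years should be strictly increasing.
    years = list(settings["time_framework"])
    if years != sorted(set(years)):
        raise ValueError("time_framework must be strictly increasing")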

12.5. Writing Outputs

12.5.1. Sinks

Sinks where output quantities can be stored.

Sinks take as argument a DataArray and store it somewhere. Additionally, they take a dictionary as argument. This dictionary will always contain the items (‘quantity’, ‘sector’, ‘year’), referring to the name of the quantity, the name of the calling sector, and the current year. They may contain additional parameters which depend on the actual sink, such as ‘filename’.

Optionally, a description of the storage (filename, etc) can be returned.

The signature of a sink is:

from typing import Mapping, Optional

from xarray import DataArray

from muse.outputs.sinks import register_output_sink


@register_output_sink(name="netcdf")
def to_netcdf(quantity: DataArray, config: Mapping) -> Optional[str]:
    pass
exception FiniteResourceException[source]

Raised when a finite resource is exceeded.

OUTPUT_SINKS = {'Aggregate': <class 'muse.outputs.sinks.YearlyAggregate'>, 'Csv': <function to_csv>, 'Excel': <function to_excel>, 'FiniteResourceLogger': <function finite_resource_logger>, 'Nc': <function to_netcdf>, 'Netcdf': <function to_netcdf>, 'ToCsv': <function to_csv>, 'ToExcel': <function to_excel>, 'ToNetcdf': <function to_netcdf>, 'Xlsx': <function to_excel>, 'YearlyAggregate': <class 'muse.outputs.sinks.YearlyAggregate'>, 'Yearlyaggregate': <class 'muse.outputs.sinks.YearlyAggregate'>, 'aggregate': <class 'muse.outputs.sinks.YearlyAggregate'>, 'csv': <function to_csv>, 'excel': <function to_excel>, 'finite-resource-logger': <function finite_resource_logger>, 'finiteResourceLogger': <function finite_resource_logger>, 'finite_resource_logger': <function finite_resource_logger>, 'finiteresourcelogger': <function finite_resource_logger>, 'nc': <function to_netcdf>, 'netcdf': <function to_netcdf>, 'to-csv': <function to_csv>, 'to-excel': <function to_excel>, 'to-netcdf': <function to_netcdf>, 'toCsv': <function to_csv>, 'toExcel': <function to_excel>, 'toNetcdf': <function to_netcdf>, 'to_csv': <function to_csv>, 'to_excel': <function to_excel>, 'to_netcdf': <function to_netcdf>, 'tocsv': <function to_csv>, 'toexcel': <function to_excel>, 'tonetcdf': <function to_netcdf>, 'xlsx': <function to_excel>}

Stores a quantity somewhere.

OUTPUT_SINK_SIGNATURE

Signature of functions used to save quantities.

alias of Callable[[DataArray | DataFrame, int, Any], str | None]

class YearlyAggregate(final_sink=None, sector='', axis='year', **kwargs)[source]

Incrementally aggregates data from year to year.

register_output_sink(function=None)[source]

Registers a function to save quantities.

sink_to_file(suffix)[source]

Simplifies sinks to files.

The decorator takes care of figuring out the path to the file, as well as trims the configuration dictionary to include only parameters for the sink itself. The decorated function returns the path to the output file.
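
A minimal sketch combining the two decorators (the sink name “txt”, the suffix, and the body are assumptions for illustration; the pattern mirrors to_csv() below, whose decorated signature takes a filename):

from muse.outputs.sinks import register_output_sink, sink_to_file


@register_output_sink(name="txt")
@sink_to_file(".txt")
def to_txt(quantity, filename, **params):
    # Write the data array as plain text; purely illustrative.
    with open(filename, "w") as fileobj:
        fileobj.write(str(quantity))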

standardize_quantity(function)[source]

Helps standardize how the quantities are specified.

This decorator adds three keyword arguments to an input function: sum_over, drop, and rounding (see, e.g., the signatures of consumption() and supply() in the next section).

The three transformations are applied in the order given, assuming an input is specified.

to_csv(quantity, filename, **params)[source]

Saves data array to csv format, using pandas.to_csv.

Parameters:
  • quantity – The data to be saved

  • filename – File to which the data should be saved

  • params – A configuration dictionary accepting any argument to pandas.to_csv

to_excel(quantity, filename, **params)[source]

Saves data array to excel format, using pandas.to_excel.

Parameters:
  • quantity – The data to be saved

  • filename – File to which the data should be saved

  • params – A configuration dictionary accepting any argument to pandas.to_excel

to_netcdf(quantity, filename, **params)[source]

Saves data array to netcdf format, using xarray.to_netcdf.

Parameters:
  • quantity – The data to be saved

  • filename – File to which the data should be saved

  • params – A configuration dictionary accepting any argument to xarray.to_netcdf

12.5.2. Sectorial Outputs

Output quantities.

Functions that compute sectorial quantities for post-simulation analysis should all follow the same signature:

from typing import Union

import xarray as xr
from pandas import DataFrame

from muse.outputs.sector import register_output_quantity  # module path assumed


@register_output_quantity
def quantity(
    capacity: xr.DataArray,
    market: xr.Dataset,
    technologies: xr.Dataset,
) -> Union[xr.DataArray, DataFrame]:
    pass

They take as input the current capacity profile, aggregated across a sector, a dataset containing market-related quantities, and a dataset characterizing the technologies in the market. They return a single xr.DataArray object.

The function should never modify its arguments.

OUTPUTS_PARAMETERS

Acceptable data structures for outputs parameters

alias of str | Mapping

OUTPUT_QUANTITIES = {'Capacity': <function capacity>, 'Consumption': <function consumption>, 'Costs': <function costs>, 'Supply': <function supply>, 'capacity': <function capacity>, 'consumption': <function consumption>, 'costs': <function costs>, 'supply': <function supply>}

Quantity for post-simulation analysis.

OUTPUT_QUANTITY_SIGNATURE

Signature of functions computing quantities for later analysis.

alias of Callable[[Dataset, DataArray, Dataset, Any], DataFrame | DataArray]

capacity(market, capacity, technologies, rounding=4)[source]

Current capacity.

consumption(market, capacity, technologies, sum_over=None, drop=None, rounding=4)[source]

Current consumption.

costs(market, capacity, technologies, sum_over=None, drop=None, rounding=4)[source]

Current costs.

factory(*parameters, sector_name='default')[source]

Creates outputs functions for post-mortem analysis.

Each parameter is a dictionary containing the following:

  • quantity (mandatory): name of the quantity to output.

  • sink (optional): name of the storage procedure, e.g. the file format or database format. When it cannot be guessed from filename, it defaults to “csv”.

  • filename (optional): path to a directory or a file where to store the quantity. In the latter case, if sink is not given, it will be determined from the file extension. The filename can incorporate markers. By default, it is “{default_output_dir}/{sector}{year}{quantity}{suffix}”.

  • any other parameter relevant to the sink, e.g. pandas.to_csv keyword arguments.

For simplicity, it is also possible to give lone strings as input. They default to {‘quantity’: string} (and the sink will default to “csv”).
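
For instance (a hedged sketch; the module path muse.outputs.sector is assumed, and the quantities named are those registered below):

from muse.outputs.sector import factory  # module path assumed

outputs = factory(
    "capacity",  # lone string: defaults to {'quantity': 'capacity'} and a csv sink
    {"quantity": "supply", "sink": "netcdf"},
    sector_name="residential",
)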

register_output_quantity(function=None)[source]

Registers a function to compute an output quantity.

supply(market, capacity, technologies, sum_over=None, drop=None, rounding=4)[source]

Current supply.

12.6. Quantities

Collection of functions to compute model quantities.

This module is meant to collect functions computing quantities of interest to the model, e.g. lcoe, maximum production for a given capacity, etc, especially where these functions are used in different areas of the model.

annual_levelized_cost_of_energy(prices, technologies, interpolation='linear', fill_value='extrapolate', **filters)[source]

Undiscounted levelized cost of energy (LCOE) of technologies on each given year.

It mostly follows the simplified LCOE given by NREL. In the argument description, we use the following:

  • [h]: hour

  • [y]: year

  • [$]: unit of currency

  • [E]: unit of energy

  • [1]: dimensionless

Parameters:
  • prices – [$/(Eh)] the price of all commodities, including consumables and fuels. This dataarray contains at least timeslice and commodity dimensions.

  • technologies

    Describe the technologies, with at least the following parameters:

    • cap_par: [$/E] overnight capital cost

    • interest_rate: [1]

    • fix_par: [$/(Eh)] fixed costs of operation and maintenance costs

    • var_par: [$/(Eh)] variable costs of operation and maintenance costs

    • fixed_inputs: [1] == [(Eh)/(Eh)] ratio indicating the amount of commodity consumed per unit of energy created.

    • fixed_outputs: [1] == [(Eh)/(Eh)] ratio indicating the amount of environmental pollutants produced per unit of energy created.

  • interpolation – interpolation method.

  • fill_value – Fill value for values outside the extrapolation range.

  • **filters – Anything by which prices can be filtered.

Returns:

The undiscounted annual LCOE in [$/(Eh)] for each technology at each timeslice.
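
A hedged usage sketch, reusing the sample-model helpers that appear in the constraint examples of this chapter (and assuming the sample market carries a prices variable):

from muse import examples
from muse.quantities import annual_levelized_cost_of_energy

res = examples.sector("residential", model="medium")
market = examples.residential_market("medium")
lcoe = annual_levelized_cost_of_energy(market.prices, res.technologies)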

capacity_in_use(production, technologies, max_dim='commodity', **filters)[source]

Capacity-in-use for each asset, given production.

Conceptually, this operation is the inverse of production.

Parameters:
  • production – Production from each technology of interest.

  • technologies – xr.Dataset describing the features of the technologies of interest. It should contain fixed_outputs and utilization_factor. Its shape is matched to capacity using muse.utilities.broadcast_techs.

  • max_dim – reduces the given dimensions using max. Defaults to “commodity”. If None, then no reduction is performed.

  • filters – keyword arguments are used to filter down the capacity and technologies. Filters not relevant to the quantities of interest, i.e. filters that are not a dimension of capacity or technologies, are silently ignored.

Returns:

Capacity-in-use for each technology, whittled down by the filters.

consumption(technologies, production, prices=None, **kwargs)[source]

Commodity consumption when fulfilling the whole production.

Currently, the consumption is implemented for commodity_max == +infinity. If prices are not given, then flexible consumption is not considered.

costed_production(demand, costs, capacity, technologies, with_minimum_service=True)[source]

Computes production from ranked assets. The assets are ranked according to their cost. The assets with the least cost are allowed to service the demand first, up to their maximum production. By default, the minimum service is applied first.

decommissioning_demand(technologies, capacity, year=None)[source]

Computes demand from process decommissioning.

If year is not given, it defaults to all years in capacity. If there are more than two years, then decommissioning is computed with respect to the first (or minimum) year.

Let \(M_t^r(y)\) be the retrofit demand, \(^{(s)}\mathcal{D}_t^r(y)\) be the decommissioning demand at the level of the sector, and \(A^r_{t, \iota}(y)\) be the assets owned by the agent. Then, the decommissioning demand for agent \(i\) is:

\[\mathcal{D}^{r, i}_{t, c}(y) = \sum_\iota \alpha_{t, \iota}^r \beta_{t, \iota, c}^r \left(A^{i, r}_{t, \iota}(y) - A^{i, r}_{t, \iota}(y + 1) \right)\]

given the utilization factor \(\alpha_{t, \iota}\) and the fixed output factor \(\beta_{t, \iota, c}\).

Furthermore, decommissioning demand is non-zero only for end-use commodities.
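
A hedged usage sketch, following the sample-model pattern used in the constraint examples above (the retrofit agent and years are illustrative):

from muse import examples
from muse.quantities import decommissioning_demand

res = examples.sector("residential", model="medium")
assets = next(a.assets for a in res.agents if a.category == "retrofit")
decommissioning = decommissioning_demand(
    res.technologies, assets.capacity, year=[2020, 2025]
)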

See also

indices, quantities, maximum_production(), is_enduse()

demand_matched_production(demand, prices, capacity, technologies, **filters)[source]

Production matching the input demand.

Parameters:
  • demand – demand to match.

  • prices – price from which to compute the annual levelized cost of energy.

  • capacity – capacity from which to obtain the maximum production constraints.

  • technologies – quantities describing the technologies.

  • **filters – keyword arguments with which to filter the input datasets and data arrays, e.g. region, or year.

emission(production, fixed_outputs)[source]

Computes emission from current products.

Emissions are computed as sum(product) * fixed_outputs.

Parameters:
  • production – Produced goods. Only those with non-environmental products are used when computing emissions.

  • fixed_outputs – factor relating total production to emissions. For convenience, this can also be a technologies dataset containing fixed_output.

Returns:

A data array containing emissions (and only emissions).

gross_margin(technologies, capacity, prices)[source]

The percentage of revenue remaining after direct expenses have been subtracted (reference: https://www.investopedia.com/terms/g/grossmargin.asp).

We first calculate the revenues, which depend on prices. We then deduct the direct expenses:

  • energy commodities INPUTS are related to fuel costs

  • environmental commodities OUTPUTS are related to environmental costs

  • variable costs are given as technodata inputs

  • non-environmental commodities OUTPUTS are related to revenues

maximum_production(technologies, capacity, **filters)[source]

Production for a given capacity.

Given a capacity \(\mathcal{A}_{t, \iota}^r\), the utilization factor \(\alpha^r_{t, \iota}\) and the fixed outputs of each technology \(\beta^r_{t, \iota, c}\), the resulting production is:

\[P_{t, \iota}^r = \alpha^r_{t, \iota}\beta^r_{t, \iota, c}\mathcal{A}_{t, \iota}^r\]

The dimensions above are only indicative. The function should work with many different input values, e.g. with capacities expanded over time-slices \(t\) or agents \(i\).

Parameters:
  • capacity – Capacity of each technology of interest. In practice, the capacity can refer to asset capacity, the max capacity, or the capacity-in-use.

  • technologies – xr.Dataset describing the features of the technologies of interest. It should contain fixed_outputs and utilization_factor. Its shape is matched to capacity using muse.utilities.broadcast_techs.

  • filters – keyword arguments are used to filter down the capacity and technologies. Filters not relevant to the quantities of interest, i.e. filters that are not a dimension of capacity or technologies, are silently ignored.

Returns:

capacity * fixed_outputs * utilization_factor, whittled down according to the filters and the set of technologies in capacity.
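
A hedged usage sketch, mirroring how maximum_production() is called in the constraint examples above:

from muse import examples
from muse.quantities import maximum_production

res = examples.sector("residential", model="medium")
assets = next(a.assets for a in res.agents if a.category == "retrofit")
production = maximum_production(res.technologies, assets.capacity)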

supply(capacity, demand, technologies, interpolation='linear', production_method=None)[source]

Production and emission for a given capacity servicing a given demand.

Supply includes two components: end-use outputs and environmental pollutants. The former consist of the demand that the current capacity is capable of servicing. Where there is excess capacity, each asset is assigned a share of the maximum production (e.g. utilization across similar assets is the same in percentage terms). Then, environmental pollutants are computed as a function of commodity outputs.

Parameters:
  • capacity – number/quantity of assets that can service the demand

  • demand – amount of each end-use required. The supply of each process will not exceed its share of the demand.

  • technologies – factors binding the capacity of an asset with its production of commodities and environmental pollutants.

Returns:

A data array where the commodity dimension only contains actual outputs (i.e. no input commodities).

supply_cost(production, lcoe, asset_dim='asset')[source]

Supply cost given production and the levelized cost of energy.

In practice, the supply cost is the weighted average LCOE over assets (asset_dim), where the weights are the production.

Parameters:
  • production – Amount of goods produced. In practice, production can be obtained from the capacity for each asset via the method muse.quantities.production.

  • lcoe – Levelized cost of energy for each good produced. In practice, it can be obtained from market prices via muse.quantities.annual_levelized_cost_of_energy or muse.quantities.lifetime_levelized_cost_of_energy.

  • asset_dim – Name of the dimension(s) holding assets, processes or technologies.
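
Conceptually, the reduction is a production-weighted mean. A self-contained sketch with made-up arrays (not the function’s actual implementation):

import xarray as xr

production = xr.DataArray([2.0, 1.0], dims="asset")
lcoe = xr.DataArray([10.0, 40.0], dims="asset")

# Production-weighted average LCOE over the asset dimension:
weighted = (production * lcoe).sum("asset") / production.sum("asset")
assert float(weighted) == 20.0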

12.7. Demand Matching Algorithm

Collection of demand-matching algorithms.

At its simplest, the demand matching algorithm solves the following problem:

  • given a demand for a commodity \(D_d\), with \(d\in\mathcal{D}\)

  • given processes to supply these commodities, with an associated cost per process, \(C_{d, i}\), with \(i\in\mathcal{I}\)

Match demand and supply while minimizing the associated cost.

\[ \begin{align}\begin{aligned}\min_{X} \sum_{d, i} C_{d,i} X_{d, i}\\X_{d, i} \geq 0\\\sum_i X_{d, i} \geq D_d\end{aligned}\end{align} \]

The basic algorithm proceeds as follows:

  1. sort all costs \(C_{d, i}\) across both \(d\) and \(i\)

  2. for each cost \(c_0\) in order:

    1. find the set of indices \(\mathcal{C}\subseteq\mathcal{D}\cup\mathcal{I}\) for which

      \[\forall (d, i) \in \mathcal{C}\quad C_{d, i} == c_0\]
    2. determine the partial result for the current cost

      \[\forall (d, i) \in \mathcal{C}\quad X_{d, i} = \frac{D_d}{|i\in\mathcal{C}|}\]

      Where \(|i\in\mathcal{C}|\) indicates the number of indices \(i\) in \(\mathcal{C}\).
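
The unconstrained procedure above can be sketched in a few lines of numpy (an illustration using the remaining-demand form of step 2, not the module’s implementation):

import numpy as np

def match_demand(demand: np.ndarray, cost: np.ndarray) -> np.ndarray:
    """Greedy matching: cheapest (d, i) pairs service remaining demand first.

    demand has shape (D,); cost has shape (D, I).
    """
    x = np.zeros_like(cost, dtype=float)
    for c0 in np.unique(cost):  # step 1: unique costs in increasing order
        for d in range(cost.shape[0]):
            (i,) = np.nonzero(cost[d] == c0)
            if i.size:  # step 2: split the unserviced demand evenly over ties
                x[d, i] += (demand[d] - x[d].sum()) / i.size
    return x

# Two demands, two processes: the cheaper process services each demand fully.
x = match_demand(np.array([3.0, 1.0]), np.array([[1.0, 2.0], [2.0, 1.0]]))
assert (x == np.array([[3.0, 0.0], [0.0, 1.0]])).all()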

However, in practice, the problem to solve often contains constraints, e.g. a constraint on production \(\sum_d X_{d, i} \leq M_i\). The algorithms in this module try to solve these constrained problems one way or another.

demand_matching(demand, cost, *constraints, protected_dims=None)[source]

Demand matching over heterogeneous dimensions.

This algorithm enables demand matching while enforcing constraints on how much an asset can produce. Any set of dimensions can be matched. The algorithm is general with respect to the dimensions in demand and cost. It also enforces constraints over sets of indices.

\[ \begin{align}\begin{aligned}\min_{X} \sum_{d, i} C_{d, i} X_{d, i}\\X_{d, i} \geq 0\\\sum_i X_{d, i} \geq D_d\\M_{(d, i) \in \mathcal{R}^{(\alpha)}}^{(\alpha)} \geq \sum_{(d, i)\notin\mathcal{R}^{(\alpha)}} X_{d, i}\end{aligned}\end{align} \]

Where \(\alpha\) is an index running over constraints, \(\mathcal{R}^{(\alpha)}\subseteq\mathcal{D}\cup\mathcal{I}\) is a subset of indices.

The algorithm proceeds as described in muse.demand_matching. However, an extra step is added to ensure that the solution falls within the convex hull formed by the constraints. This projects the current solution onto the constraints. Hence, the solution will depend on the order in which the constraints are given.

  1. sort all costs \(C_{d, i}\) across both \(d\) and \(i\)

  2. for each cost \(c_0\) in order:

    1. find the set of indices \(\mathcal{C}\)

      \[ \begin{align}\begin{aligned}\mathcal{C}\subseteq\mathcal{D}\cup\mathcal{I}\\\forall (d, i) \in \mathcal{C}\quad C_{d, i} == c_0\end{aligned}\end{align} \]
    2. determine an interim partial result for the current cost

      \[\forall (d, i) \in \mathcal{C}\quad \delta X_{d, i} = \frac{1}{|i\in\mathcal{C}|}\left( D_d - \sum_{j \in \mathcal{I}} X_{d, j}\right)\]

      Where \(|i\in\mathcal{C}|\) indicates the number of \(i\) indices in \(\mathcal{C}\). The expression in the parenthesis is the currently unserviced demand.

    3. Loop over each constraint \(\alpha\). Below we drop the index \(\alpha\) over constraints for simplicity.

      1. Determine the excess over the constraint:

        \[E_{(d, i) \in \mathcal{R}} = \max\left\{ 0, \sum_{(d, i)\notin\mathcal{R}}\left( X_{d, i} + \delta X_{d, i} \right) - M_{(d, i) \in \mathcal{R}} \right\}\]
      2. Correct \(\delta X\) as follows:

        \[ \begin{align}\begin{aligned}\forall (d, i) \in \mathcal{C}\cap\mathcal{R}\quad \delta X\prime_{d, i} = E_{(d, i)} \frac{\delta X_{(d, i)}}{ \sum_{(e, j)\in \mathcal{C}\cap\mathcal{R}} \delta X_{(e,j)} }\\ \forall (d, i) \notin \mathcal{R}, (d, i)\in\mathcal{C} \quad \delta X\prime_{d, i} = 0\end{aligned}\end{align} \]
      3. Set \(\delta X = \max(0, \delta X - \delta X\prime)\)

A more complex problem would see independent dimensions for each quantity. In that case, we can reduce it to the original problem as shown here:

\[ \begin{align}\begin{aligned}C_{d, i} = \min_c C\prime_{d, i, c}\\D_d = \sum_{d\prime} D\prime_{d, d\prime}\\M_r = \sum_m M\prime_{r, m}\\X_{d, d\prime, i, m, c} = \left(C\prime_{d, i, c} == C_{d, i}\right) \frac{M\prime_{r, m}}{M_r} \frac{D\prime_{d, d\prime}}{D_d} X_{d, i}\end{aligned}\end{align} \]

A dimension could be shared by all quantities, in which case each point along that dimension is treated as independent.

Similarly, if a dimension is shared only by the demand and a constraint, but not by the cost, then the problem can be reduced to a set of problems independent along that direction.

Parameters:
  • demand – Demand to match with production. It should have the same physical units as max_production.

  • cost – Cost to minimize while fulfilling the demand.

  • *constraints – each item is a separate constraint \(M_r\).

Returns:

An array with the joint dimensionality of max_production, cost, and demand, containing the supply that fulfills the demand. The units of this supply are the same as demand and max_production.

12.8. Miscellaneous

12.8.1. Timeslices

Timeslice utility functions.

aggregate_transforms(settings=None, timeslice=None)[source]

Creates dictionary of transforms for aggregate levels.

The transforms are used to create the projectors towards the finest timeslice.

Parameters:
  • timeslice – a DataArray with the timeslice dimension.

  • settings – A dictionary mapping the name of an aggregate with the values it aggregates, or a string that toml will parse as such. If not given, only the unit transforms are returned.

Returns:

A dictionary of transforms mapping each possible slice to its corresponding finest timeslices.

Example

>>> toml = """
...     [timeslices]
...     spring.weekday = 5
...     spring.weekend = 2
...     autumn.weekday = 5
...     autumn.weekend = 2
...     winter.weekday = 5
...     winter.weekend = 2
...     summer.weekday = 5
...     summer.weekend = 2
...
...     [timeslices.aggregates]
...     spautumn = ["spring", "autumn"]
...     week = ["weekday", "weekend"]
... """
>>> from muse.timeslices import reference_timeslice, aggregate_transforms
>>> ref = reference_timeslice(toml)
>>> transforms = aggregate_transforms(toml, ref)
>>> transforms[("spring", "weekend")]
array([0, 1, 0, 0, 0, 0, 0, 0])
>>> transforms[("spautumn", "weekday")]
array([1, 0, 1, 0, 0, 0, 0, 0])
>>> transforms[("autumn", "week")].T
array([0, 0, 1, 1, 0, 0, 0, 0])
>>> transforms[("spautumn", "week")].T
array([1, 1, 1, 1, 0, 0, 0, 0])
convert_timeslice(x, ts, quantity=QuantityType.EXTENSIVE, finest=None, transforms=None)[source]

Adjusts the timeslice of x to match that of ts.

The conversion can be done in one of two ways, depending on whether the quantity is extensive or intensive. See QuantityType.

Example

Let’s define three timeslices, from finest, to fine, to rough:

>>> toml = """
...     ["timeslices"]
...     winter.weekday.day = 5
...     winter.weekday.night = 5
...     winter.weekend.day = 2
...     winter.weekend.night = 2
...     summer.weekday.day = 5
...     summer.weekday.night = 5
...     summer.weekend.day = 2
...     summer.weekend.night = 2
...     level_names = ["semester", "week", "day"]
...     aggregates.allday = ["day", "night"]
...     aggregates.allweek = ["weekend", "weekday"]
...     aggregates.allyear = ["winter", "summer"]
... """
>>> from muse.timeslices import setup_module
>>> from muse.readers import read_timeslices
>>> setup_module(toml)
>>> finest_ts = read_timeslices()
>>> fine_ts = read_timeslices(dict(week=["allweek"]))
>>> rough_ts = read_timeslices(dict(semester=["allyear"], day=["allday"]))

Let’s also define two other data-arrays to demonstrate how we can play with dimensions:

>>> from numpy import array
>>> from xarray import DataArray
>>> x = DataArray(
...     [5, 2, 3],
...     coords={'a': array([1, 2, 3], dtype="int64")},
...     dims='a'
... )
>>> y = DataArray([1, 1, 2], coords={'b': ["d", "e", "f"]}, dims='b')

We can now easily convert arrays with different dimensions. First, let’s check conversion from an array with no timeslices:

>>> from xarray import ones_like
>>> from muse.timeslices import convert_timeslice, QuantityType
>>> z = convert_timeslice(x, finest_ts, QuantityType.EXTENSIVE)
>>> z.round(6)
<xarray.DataArray (timeslice: 8, a: 3)>
array([[0.892857, 0.357143, 0.535714],
       [0.892857, 0.357143, 0.535714],
       [0.357143, 0.142857, 0.214286],
       [0.357143, 0.142857, 0.214286],
       [0.892857, 0.357143, 0.535714],
       [0.892857, 0.357143, 0.535714],
       [0.357143, 0.142857, 0.214286],
       [0.357143, 0.142857, 0.214286]])
Coordinates:
  * timeslice  (timeslice) MultiIndex
  - semester   (timeslice) object 'winter' 'winter' ... 'summer' 'summer'
  - week       (timeslice) object 'weekday' 'weekday' ... 'weekend' 'weekend'
  - day        (timeslice) object 'day' 'night' 'day' ... 'night' 'day' 'night'
  * a          (a) int64 1 2 3
>>> z.sum("timeslice")
<xarray.DataArray (a: 3)>
array([5., 2., 3.])
Coordinates:
  * a        (a) int64 1 2 3

As expected, the sum over timeslices recovers the original array.

In the case of an intensive quantity without a timeslice dimension, the operation does not do anything:

>>> convert_timeslice([1, 2], rough_ts, QuantityType.INTENSIVE)
[1, 2]

More interesting is the conversion between different timeslices:

>>> from xarray import zeros_like
>>> zfine = x + y + zeros_like(fine_ts.timeslice, dtype=int)
>>> zrough = convert_timeslice(zfine, rough_ts)
>>> zrough.round(6)
<xarray.DataArray (timeslice: 2, a: 3, b: 3)>
array([[[17.142857, 17.142857, 20.      ],
        [ 8.571429,  8.571429, 11.428571],
        [11.428571, 11.428571, 14.285714]],

       [[ 6.857143,  6.857143,  8.      ],
        [ 3.428571,  3.428571,  4.571429],
        [ 4.571429,  4.571429,  5.714286]]])
Coordinates:
  * timeslice  (timeslice) MultiIndex
  - semester   (timeslice) object 'allyear' 'allyear'
  - week       (timeslice) object 'weekday' 'weekend'
  - day        (timeslice) object 'allday' 'allday'
  * a          (a) int64 1 2 3
  * b          (b) <U1 'd' 'e' 'f'

We can check that nothing has been added to z (the quantity is EXTENSIVE by default):

>>> from numpy import all
>>> all(zfine.sum("timeslice").round(6) == zrough.sum("timeslice").round(6))
<xarray.DataArray ()>
array(True)

Or that the ratio of weekdays to weekends makes sense:

>>> weekdays = (
...     zrough
...     .unstack("timeslice")
...     .sel(week="weekday")
...     .stack(timeslice=["semester", "day"])
...     .squeeze()
... )
>>> weekend = (
...     zrough
...     .unstack("timeslice")
...     .sel(week="weekend")
...     .stack(timeslice=["semester", "day"])
...     .squeeze()
... )
>>> bool(all((weekend * 5).round(6) == (weekdays * 2).round(6)))
True

reference_timeslice(settings, level_names=('month', 'day', 'hour'), name='timeslice')[source]

Reads reference timeslice from toml like input.

Parameters:
  • settings – A dictionary of nested dictionaries or a string that toml will interpret as such. The nesting specifies different levels of the timeslice. If a dictionary and it contains “timeslices” key, then the associated value is used as the root dictionary. Ultimately, the most nested values should be relative weights for each slice in the timeslice, e.g. the corresponding number of hours.

  • level_names – Hints indicating the names of each level. Can also be given a “level_names” key in settings.

  • name – name of the reference array

Returns:

A DataArray with dimension timeslice and values representing the relative weight of each timeslice.

Example

>>> from muse.timeslices import reference_timeslice
>>> reference_timeslice(
...     """
...     [timeslices]
...     spring.weekday = 5
...     spring.weekend = 2
...     autumn.weekday = 5
...     autumn.weekend = 2
...     winter.weekday = 5
...     winter.weekend = 2
...     summer.weekday = 5
...     summer.weekend = 2
...     level_names = ["season", "week"]
...     """
... )
<xarray.DataArray (timeslice: 8)>
array([5, 2, 5, 2, 5, 2, 5, 2])
Coordinates:
  * timeslice  (timeslice) MultiIndex
  - season     (timeslice) object 'spring' 'spring' ... 'summer' 'summer'
  - week       (timeslice) object 'weekday' 'weekend' ... 'weekday' 'weekend'
represent_hours(timeslices, nhours=8765.82)[source]

Number of hours per timeslice.

Parameters:
  • timeslices – The timeslice for which to compute the number of hours

  • nhours – The total number of hours represented in the timeslice. Defaults to the average number of hours in a year.

setup_module(settings)[source]

Sets up module singletons.

timeslice_projector(x, finest=None, transforms=None)[source]

Project time-slice to standardized finest time-slices.

Returns a matrix from the input timeslice x to the finest timeslice, using the input transforms. The latter are a set of transforms that map indices from one timeslice to indices in another.

Example

Let’s define the following timeslices and aggregates:

>>> toml = """
...     ["timeslices"]
...     winter.weekday.day = 5
...     winter.weekday.night = 5
...     winter.weekend.day = 2
...     winter.weekend.night = 2
...     winter.weekend.dusk = 1
...     summer.weekday.day = 5
...     summer.weekday.night = 5
...     summer.weekend.day = 2
...     summer.weekend.night = 2
...     summer.weekend.dusk = 1
...     level_names = ["semester", "week", "day"]
...     aggregates.allday = ["day", "night"]
... """
>>> from muse.timeslices import (
...     reference_timeslice,  aggregate_transforms
... )
>>> ref = reference_timeslice(toml)
>>> transforms = aggregate_transforms(toml, ref)
>>> from pandas import MultiIndex
>>> from xarray import DataArray
>>> input_ts = DataArray(
...     [1, 2, 3],
...     coords={
...         "timeslice": MultiIndex.from_tuples(
...             [
...                 ("winter", "weekday", "allday"),
...                 ("winter", "weekend", "dusk"),
...                 ("summer", "weekend", "night"),
...             ],
...             names=ref.get_index("timeslice").names,
...         ),
...     },
...     dims="timeslice"
... )
>>> input_ts
<xarray.DataArray (timeslice: 3)>
array([1, 2, 3])
Coordinates:
  * timeslice  (timeslice) MultiIndex
  - semester   (timeslice) object 'winter' 'winter' 'summer'
  - week       (timeslice) object 'weekday' 'weekend' 'weekend'
  - day        (timeslice) object 'allday' 'dusk' 'night'

The input timeslice does not have to be complete. In any case, we can now compute a transform, i.e. a matrix that will take this timeslice and transform it to the equivalent times in the finest timeslice:

>>> from muse.timeslices import timeslice_projector
>>> timeslice_projector(input_ts, ref, transforms)
<xarray.DataArray 'projector' (finest_timeslice: 10, timeslice: 3)>
array([[1, 0, 0],
       [1, 0, 0],
       [0, 0, 0],
       [0, 0, 0],
       [0, 1, 0],
       [0, 0, 0],
       [0, 0, 0],
       [0, 0, 0],
       [0, 0, 1],
       [0, 0, 0]])
Coordinates:
  * finest_timeslice  (finest_timeslice) MultiIndex
  - finest_semester   (finest_timeslice) object 'winter' 'winter' ... 'summer'
  - finest_week       (finest_timeslice) object 'weekday' ... 'weekend'
  - finest_day        (finest_timeslice) object 'day' 'night' ... 'night' 'dusk'
  * timeslice         (timeslice) MultiIndex
  - semester          (timeslice) object 'winter' 'winter' 'summer'
  - week              (timeslice) object 'weekday' 'weekend' 'weekend'
  - day               (timeslice) object 'allday' 'dusk' 'night'

It is possible to give as input an array which does not have a timeslice of its own:

>>> nots = DataArray([5.0, 1.0, 2.0], dims="a", coords={'a': [1, 2, 3]})
>>> timeslice_projector(nots, ref, transforms).T
<xarray.DataArray (timeslice: 1, finest_timeslice: 10)>
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])
Coordinates:
  * finest_timeslice  (finest_timeslice) MultiIndex
  - finest_semester   (finest_timeslice) object 'winter' 'winter' ... 'summer'
  - finest_week       (finest_timeslice) object 'weekday' ... 'weekend'
  - finest_day        (finest_timeslice) object 'day' 'night' ... 'night' 'dusk'
Dimensions without coordinates: timeslice

12.8.2. Commodities

Methods and types around commodities.

class CommodityUsage(value)[source]

Flags to specify the different kinds of commodities.

For details on how enums work, see Python’s documentation. In practice, CommodityUsage centralizes in one place the different kinds of commodities that are meaningful to the generalized sector, e.g. commodities that are consumed by the sector and commodities that are produced by the sector, as well as commodities that are, somehow, environmental.

With the exception of CommodityUsage.OTHER, flags can be combined in any fashion. CommodityUsage.PRODUCT | CommodityUsage.CONSUMABLE is a commodity that is both consumed and produced by a sector. CommodityUsage.ENVIRONMENTAL | CommodityUsage.ENERGY | CommodityUsage.CONSUMABLE is an environmental energy commodity consumed by the sector.

CommodityUsage.OTHER is an alias for no flag. It is meant for commodities that should be ignored by the sector.

CONSUMABLE = 1

Commodity which can be consumed by the sector.

ENERGY = 8

Commodity which is a fuel for this or another sector.

ENVIRONMENTAL = 4

Commodity which is a pollutant.

OTHER = 0

Not relevant for current sector.

PRODUCT = 2

Commodity which can be produced by the sector.
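
Since CommodityUsage is a flag enum, the members above combine and test with standard bitwise operators (plain Python enum.Flag behaviour):

>>> from muse.commodities import CommodityUsage
>>> usage = CommodityUsage.ENERGY | CommodityUsage.CONSUMABLE
>>> bool(usage & CommodityUsage.ENERGY)
True
>>> (usage & CommodityUsage.PRODUCT) == CommodityUsage.OTHER
True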

check_usage(data, flag, match='all')[source]

Match usage flags with input data array.

Parameters:
  • data – sequence for which to match flags elementwise.

  • flag – flag or combination of flags to match. The input can be a string, such as “product | environmental”, or a CommodityUsage instance. Defaults to “other”.

  • match – one of:

    - "all": all flags should match. This is the default.

    - "any": at least one flag should match.

    - "exact": each flag should match, and nothing else.

Examples

>>> from muse.commodities import CommodityUsage, check_usage
>>> data = [
...     CommodityUsage.OTHER,
...     CommodityUsage.PRODUCT,
...     CommodityUsage.ENVIRONMENTAL | CommodityUsage.PRODUCT,
...     CommodityUsage.ENVIRONMENTAL,
... ]

Matching “all”:

>>> check_usage(data, CommodityUsage.PRODUCT).tolist()
[False, True, True, False]
>>> check_usage(data, CommodityUsage.ENVIRONMENTAL).tolist()
[False, False, True, True]
>>> check_usage(
...     data, CommodityUsage.ENVIRONMENTAL | CommodityUsage.PRODUCT
... ).tolist()
[False, False, True, False]

Matching “any”:

>>> check_usage(data, CommodityUsage.PRODUCT, match="any").tolist()
[False, True, True, False]
>>> check_usage(data, CommodityUsage.ENVIRONMENTAL, match="any").tolist()
[False, False, True, True]
>>> check_usage(data, "environmental | product", match="any").tolist()
[False, True, True, True]

Matching “exact”:

>>> check_usage(data, "PRODUCT", match="exact").tolist()
[False, True, False, False]
>>> check_usage(data, CommodityUsage.ENVIRONMENTAL, match="exact").tolist()
[False, False, False, True]
>>> check_usage(data, "ENVIRONMENTAL | PRODUCT", match="exact").tolist()
[False, False, True, False]

Finally, checking that no flags have been set can be done with:

>>> check_usage(data, CommodityUsage.OTHER, match="exact").tolist()
[True, False, False, False]
>>> check_usage(data, None, match="exact").tolist()
[True, False, False, False]
is_consumable(data)[source]

Any consumable.

Examples

>>> from muse.commodities import CommodityUsage, is_consumable
>>> data = [
...     CommodityUsage.CONSUMABLE,
...     CommodityUsage.PRODUCT,
...     CommodityUsage.ENVIRONMENTAL,
...     CommodityUsage.PRODUCT | CommodityUsage.CONSUMABLE,
...     CommodityUsage.ENVIRONMENTAL | CommodityUsage.PRODUCT,
... ]
>>> is_consumable(data).tolist()
[True, False, False, True, False]
is_enduse(data)[source]

Non-environmental product.

Examples

>>> from muse.commodities import CommodityUsage, is_enduse
>>> data = [
...     CommodityUsage.CONSUMABLE,
...     CommodityUsage.PRODUCT,
...     CommodityUsage.ENVIRONMENTAL,
...     CommodityUsage.PRODUCT | CommodityUsage.CONSUMABLE,
...     CommodityUsage.ENVIRONMENTAL | CommodityUsage.PRODUCT,
... ]
>>> is_enduse(data).tolist()
[False, True, False, True, False]
is_fuel(data)[source]

Any consumable energy.

Examples

>>> from muse.commodities import CommodityUsage, is_fuel
>>> data = [
...     CommodityUsage.CONSUMABLE,
...     CommodityUsage.PRODUCT,
...     CommodityUsage.ENERGY,
...     CommodityUsage.ENERGY | CommodityUsage.CONSUMABLE,
...     CommodityUsage.ENERGY | CommodityUsage.CONSUMABLE
...         | CommodityUsage.ENVIRONMENTAL,
...     CommodityUsage.ENERGY | CommodityUsage.CONSUMABLE
...         | CommodityUsage.PRODUCT,
...     CommodityUsage.ENERGY | CommodityUsage.PRODUCT,
... ]
>>> is_fuel(data).tolist()
[False, False, False, True, True, True, False]
is_material(data)[source]

Any non-energy non-environmental consumable.

Examples

>>> from muse.commodities import CommodityUsage, is_material
>>> data = [
...     CommodityUsage.CONSUMABLE,
...     CommodityUsage.PRODUCT,
...     CommodityUsage.ENERGY,
...     CommodityUsage.ENERGY | CommodityUsage.CONSUMABLE,
...     CommodityUsage.CONSUMABLE | CommodityUsage.ENVIRONMENTAL,
...     CommodityUsage.ENERGY | CommodityUsage.CONSUMABLE
...         | CommodityUsage.PRODUCT,
...     CommodityUsage.CONSUMABLE | CommodityUsage.PRODUCT,
... ]
>>> is_material(data).tolist()
[True, False, False, False, False, False, True]
is_other(data)[source]

No flags are set.

Examples

>>> from muse.commodities import CommodityUsage, is_other
>>> data = [
...     CommodityUsage.OTHER,
...     CommodityUsage.PRODUCT,
...     CommodityUsage.PRODUCT | CommodityUsage.OTHER,
... ]
>>> is_other(data).tolist()
[True, False, False]
is_pollutant(data)[source]

Environmental product.

Examples

>>> from muse.commodities import CommodityUsage, is_pollutant
>>> data = [
...     CommodityUsage.CONSUMABLE,
...     CommodityUsage.PRODUCT,
...     CommodityUsage.ENVIRONMENTAL,
...     CommodityUsage.PRODUCT | CommodityUsage.CONSUMABLE,
...     CommodityUsage.ENVIRONMENTAL | CommodityUsage.PRODUCT,
... ]
>>> is_pollutant(data).tolist()
[False, False, False, False, True]

12.8.3. Regression functions

Functions and functors to compute macro-drivers.

class Exponential(interpolation='linear', base_year=2010, **kwargs)[source]

Regression function: exponential

This functor is a regression function registered with MUSE as ‘exponential’.

class ExponentialAdj(interpolation='linear', base_year=2010, **kwargs)[source]

Regression function: exponentialadj

This functor is a regression function registered with MUSE as ‘exponentialadj’.

class Linear(interpolation='linear', base_year=2010, **kwargs)[source]

a * population + b * (gdp - gdp[2010]/population[2010] * population)

class Logistic(interpolation='linear', base_year=2010, **kwargs)[source]

Regression function: logistic

This functor is a regression function registered with MUSE as ‘logistic’.

class LogisticSigmoid(interpolation='linear', base_year=2010, **kwargs)[source]

Regression function: logisticsigmoid

This functor is a regression function registered with MUSE as ‘logisticsigmoid’.

class Loglog(interpolation='linear', base_year=2010, **kwargs)[source]

Regression function: loglog

This functor is a regression function registered with MUSE as ‘loglog’.

endogenous_demand(regression_parameters, drivers, sector=None, **kwargs)[source]

Endogenous demand based on macro drivers and regression parameters.

factory(regression_parameters, sector=None)[source]

Creates regression functor from standard MUSE data for given sector.

register_regression(Functor=None, name=None)[source]

Registers a functor with MUSE regressions.

Regression functors are registered with MUSE so that they can be instantiated easily by name.

The name that a functor is registered with defaults to the snake_case version of the functor’s class name. However, it can also be specified explicitly as a keyword argument. In any case, it must be unique amongst all registered regression functors.
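
A hedged sketch of the registration pattern (the import path muse.regressions and the subclassing of Exponential are assumptions for illustration; see also the generic registrator example in section 12.8.4):

from muse.regressions import register_regression, Exponential

# Hypothetical functor, registered under an explicit, unique name.
@register_regression(name="my_exponential")
class MyExponential(Exponential):
    """Variant of the exponential regression, for illustration only."""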

12.8.4. Functionality Registration

Registrators that allow pluggable data to logic transforms.

registrator(decorator=None, registry=None, logname=None, loglevel='Debug')[source]

A decorator to create a decorator that registers functions with MUSE.

This is a decorator that takes another decorator as an argument; hence it returns a decorator. It simplifies and standardizes creating decorators to register functions with MUSE.

The registrator expects as non-optional keyword argument a registry where the resulting decorator will register functions.

Furthermore, the final function (the one passed to the decorator passed to this function) will emit a standardized log-call.

Example

At its simplest, creating a registrator and registering a function starts with declaring a registry.

>>> REGISTRY = {}

In general, it will be a variable owned directly by a module, hence the all-caps. Creating the registrator then follows:

>>> from muse.registration import registrator
>>> @registrator(registry=REGISTRY, logname='my stuff',
...              loglevel='Info')
... def register_mystuff(function):
...     return function

This registrator does nothing more than register the function; a more interesting example is given further below. A function can then be registered:

>>> @register_mystuff(name='yoyo')
... def my_registered_function(a, b):
...     return a + b

The argument ‘yoyo’ is optional. It adds aliases for the function in the registry. In any case, functions are registered with default aliases corresponding to standard name variations, e.g. CamelCase, camelCase, and kebab-case, as illustrated below:

>>> REGISTRY['my_registered_function'] is my_registered_function
True
>>> REGISTRY['my-registered-function'] is my_registered_function
True
>>> REGISTRY['yoyo'] is my_registered_function
True

A more interesting case would involve the registrator automatically adding functionality to the input function. For instance, the inputs could be manipulated and the result of the function could be automatically transformed to a string:

>>> from muse.registration import registrator
>>> @registrator(registry=REGISTRY)
... def register_mystuff(function):
...     from functools import wraps
...
...     @wraps(function)
...     def decorated(a, b) -> str:
...         result = function(2 * a, 3 * b)
...         return str(result)
...
...     return decorated
>>> @register_mystuff
... def other(a, b):
...     return a + b
>>> isinstance(REGISTRY['other'](-3, 2), str)
True
>>> REGISTRY['other'](-3, 2) == "0"
True

12.8.5. Utilities

Collection of functions and stand-alone algorithms.

agent_concatenation(data, dim='asset', name='agent', fill_value=0)[source]

Concatenates input map along given dimension.

Example

Let’s create sets of random assets to work with. We set the seed so that this example can be reproduced exactly.

>>> from muse.examples import random_agent_assets
>>> rng = np.random.default_rng(1234)
>>> assets = {i: random_agent_assets(rng) for i in range(5)}

The concatenation will create a new dataset (or data array) combining all the inputs along the dimension “asset”. The origin of each datum is retained in a new coordinate “agent” with dimension “asset”.

>>> from muse.utilities import agent_concatenation
>>> aggregate = agent_concatenation(assets)
>>> aggregate
<xarray.Dataset>
Dimensions:     (asset: 19, year: 12)
Coordinates:
  * year        (year) int64 2033 2035 2036 2037 2039 ... 2046 2047 2048 2049
    technology  (asset) <U9 'oven' 'stove' 'oven' ... 'stove' 'oven' 'thermomix'
    region      (asset) <U9 'Brexitham' 'Brexitham' ... 'Brexitham' 'Brexitham'
    agent       (asset) ... 0 0 0 0 0 1 1 1 2 2 2 2 3 3 3 4 4 4 4
    installed   (asset) int64 2030 2025 2030 2010 2030 ... 2025 2030 2010 2025
Dimensions without coordinates: asset
Data variables:
    capacity    (asset, year) float64 26.0 26.0 26.0 56.0 ... 62.0 62.0 62.0

Note that the dtype of the capacity has changed from integer to floating point. This is due to how xarray performs the operation.

We can check that all the data from each agent is indeed present in the aggregate.

>>> for agent, inventory in assets.items():
...    assert (aggregate.sel(asset=aggregate.agent == agent) == inventory).all()

However, it should be noted that the data is not always strictly equivalent: dimensions outside of “asset” (most notably “year”) will include all points from all agents. Missing values along the “year” dimension are forward filled (and backfilled with zeros); others are left as “NaN”.

aggregate_technology_model(data, dim='asset', drop='installed')[source]

Aggregate together assets that differ only in their installation year.

The assets of a given agent, region, and technology but with different installation years are grouped together and summed over.

Example

We first create a random set of agent assets and aggregate them. Some of these agents own assets of the same technology but potentially with different installation years. This function will aggregate together all assets of a given agent with the same technology.

>>> from muse.examples import random_agent_assets
>>> from muse.utilities import agent_concatenation, aggregate_technology_model
>>> rng = np.random.default_rng(1234)
>>> agent_assets = {i: random_agent_assets(rng) for i in range(5)}
>>> assets = agent_concatenation(agent_assets)
>>> reduced = aggregate_technology_model(assets)

We can check that the tuples (agent, technology) are unique (each agent works in a single region):

>>> ids = list(zip(reduced.agent.values, reduced.technology.values))
>>> assert len(set(ids)) == len(ids)

And we can check they correspond to the right summation:

>>> for agent, technology in set(ids):
...     techsel = assets.technology == technology
...     agsel = assets.agent == agent
...     expected = assets.sel(asset=techsel & agsel).sum("asset")
...     techsel = reduced.technology == technology
...     agsel = reduced.agent == agent
...     actual = reduced.sel(asset=techsel & agsel)
...     assert len(actual.asset) == 1
...     assert (actual == expected).all()
avoid_repetitions(data, dim='year')[source]

List of years such that there is no repetition in the data.

It removes the central year of any three consecutive years where all data is the same. This means the original data can be reobtained via a linear interpolation or a forward fill.

The first and last year are always preserved.
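
A minimal sketch of the intended usage; the expected result is inferred from the description above, not verified:

>>> import xarray as xr
>>> from muse.utilities import avoid_repetitions
>>> capacity = xr.DataArray(
...     [1.0, 1.0, 1.0, 2.0, 2.0],
...     coords={"year": [2010, 2015, 2020, 2025, 2030]},
...     dims="year",
... )
>>> years = avoid_repetitions(capacity)  # expected to drop 2015 only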

broadcast_techs(technologies, template, dimension='asset', interpolation='linear', installed_as_year=True, **kwargs)[source]

Broadcasts technologies to the shape of template in given dimension.

The dimensions of the technologies are fully explicit, in that each concept ‘technology’, ‘region’, ‘year’ (for year of issue) is a separate dimension. However, the dataset or data arrays representing other quantities, such as capacity, are often flattened out with coordinates ‘region’, ‘installed’, and ‘technology’ represented in a single ‘asset’ dimension. This latter representation is sparse if not all combinations of ‘region’, ‘installed’, and ‘technology’ are present, whereas the former representation makes it easier to select a subset of the same.

This function broadcasts the first representation to the shape and coordinates of the second.

Parameters:
  • technologies – The dataset to broadcast

  • template – the dataset or data-array to use as a template

  • dimension – the name of the dimension from template over which to broadcast

  • interpolation – interpolation method used across year

  • installed_as_year – if the coordinate installed exists, then it is applied to the year dimension of the technologies dataset

  • kwargs – further arguments are used as initial filters over the technologies dataset.
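
A hedged sketch of the two representations, using hand-built stand-ins for real MUSE inputs (the variable cap_par is illustrative only):

>>> import numpy as np
>>> import xarray as xr
>>> from muse.utilities import broadcast_techs
>>> technologies = xr.Dataset(
...     {"cap_par": (("technology", "region", "year"), np.ones((2, 1, 2)))},
...     coords={"technology": ["a", "b"], "region": ["x"], "year": [2010, 2020]},
... )
>>> template = xr.Dataset()
>>> template["technology"] = "asset", ["a", "b", "a"]
>>> template["region"] = "asset", ["x", "x", "x"]
>>> template["installed"] = "asset", [2010, 2010, 2020]
>>> template = template.set_coords(("technology", "region", "installed"))
>>> flat = broadcast_techs(technologies, template)  # one entry per asset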

clean_assets(assets, years)[source]

Cleans up and prepares assets for the current iteration.

  • adds current and forecast year by backfilling missing entries

  • removes assets for which there is no capacity now or in the future

coords_to_multiindex(data, dimension='asset')[source]

Creates a multi-index from flattened multiple coords.
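
A hedged sketch of the round trip with multiindex_to_coords (assuming the two functions are mutual inverses for well-formed inputs):

>>> import xarray as xr
>>> from muse.utilities import coords_to_multiindex, multiindex_to_coords
>>> flat = xr.Dataset()
>>> flat["installed"] = "asset", [1990, 1991]
>>> flat["technology"] = "asset", ["a", "b"]
>>> flat = flat.set_coords(("installed", "technology"))
>>> stacked = coords_to_multiindex(flat)  # "asset" becomes a multi-index
>>> roundtrip = multiindex_to_coords(stacked)  # back to flattened coords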

filter_input(dataset, year=None, interpolation='linear', **kwargs)[source]

Filter inputs, taking care to interpolate years.
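
A hedged sketch with hypothetical data; keyword arguments are assumed to select along matching coordinates, while year is interpolated:

>>> import xarray as xr
>>> from muse.utilities import filter_input
>>> prices = xr.DataArray(
...     [[1.0, 2.0], [3.0, 4.0]],
...     coords={"year": [2010, 2020], "region": ["x", "y"]},
...     dims=("year", "region"),
... )
>>> filter_input(prices, year=2015, region="x")  # linear interpolation gives 2.0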

filter_with_template(data, template, asset_dimension='asset', **kwargs)[source]

Filters data to match template.

If the asset_dimension is present in template.dims, then the call is forwarded to broadcast_techs. Otherwise, the set of dimensions and indices in common between template and data are determined, and the resulting call is forwarded to filter_input.

Parameters:
  • data – Data to transform

  • template – Data from which to figure coordinates and dimensions

  • asset_dimension – Name of the dimension which if present indicates the format is that of an asset (see broadcast_techs)

  • kwargs – passed on to broadcast_techs or filter_input

Returns:

Data transformed to match the form of template.

future_propagation(data, future, threshold=1e-12, dim='year')[source]

Propagates values into the future.

Example

Data should be an array with at least one dimension, “year”:

>>> coords = dict(year=list(range(2020, 2040, 5)), fuel=["gas", "coal"])
>>> data = xr.DataArray(
...     [list(range(4)), list(range(-5, -1))],
...     coords=coords,
...     dims=("fuel", "year")
... )

future is an array with exactly one year in its year coordinate, or that coordinate must correspond to a scalar. That one year should also be present in data.

>>> future = xr.DataArray(
...     [1.2, -3.95], coords=dict(fuel=coords['fuel'], year=2025), dims="fuel",
... )

This function propagates the values from future into data, but only where those values differ from the current-year values by more than the given threshold:

>>> from muse.utilities import future_propagation
>>> future_propagation(data, future, threshold=0.1)
<xarray.DataArray (fuel: 2, year: 4)>
array([[ 0. ,  1.2,  1.2,  1.2],
       [-5. , -4. , -3. , -2. ]])
Coordinates:
  * year     (year) ... 2020 2025 2030 2035
  * fuel     (fuel) <U4 'gas' 'coal'

Above, the data for coal is not sufficiently different given the threshold. Hence, the future values for coal remain as they were.

The dimensions of future do not have to match exactly those of data. Standard broadcasting is used if they do not match:

>>> future_propagation(data, future.sel(fuel="gas", drop=True), threshold=0.1)
<xarray.DataArray (fuel: 2, year: 4)>
array([[ 0. ,  1.2,  1.2,  1.2],
       [-5. ,  1.2,  1.2,  1.2]])
Coordinates:
  * year     (year) ... 2020 2025 2030 2035
  * fuel     (fuel) <U4 'gas' 'coal'
>>> future_propagation(data, future.sel(fuel="coal", drop=True), threshold=0.1)
<xarray.DataArray (fuel: 2, year: 4)>
array([[ 0.  , -3.95, -3.95, -3.95],
       [-5.  , -4.  , -3.  , -2.  ]])
Coordinates:
  * year     (year) ... 2020 2025 2030 2035
  * fuel     (fuel) <U4 'gas' 'coal'
lexical_comparison(objectives, binsize, order=None, bin_last=True)[source]

Lexical comparison over the objectives.

Lexical comparison operates by binning the objectives into bins of width binsize. Once binned, dimensions other than asset and technology are reduced by taking the max, e.g. the largest constraint. Finally, the objectives are ranked lexicographically, in the order given by the parameters.

Parameters:
  • objectives – xr.Dataset containing the objectives to rank

  • binsize – bin size, minimization direction (+ -> minimize, - -> maximize), and (optionally) order of the lexicographical comparison. The order is the one given by binsize.data_vars if the argument order is None.

  • order – Optional array indicating the order in which to rank the tuples.

  • bin_last – Whether the last metric should be binned, or whether it should be left as the type it already is (i.e. no flooring and no conversion to integer).

Returns:

An array of tuples which can subsequently be compared lexicographically.
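
A hedged sketch; the shapes of objectives and binsize here are assumptions based on the description above:

>>> import xarray as xr
>>> from muse.utilities import lexical_comparison
>>> objectives = xr.Dataset(
...     {"cost": ("asset", [1.2, 3.4, 2.2]), "emissions": ("asset", [10.0, 2.0, 5.0])}
... )
>>> binsize = xr.Dataset({"cost": 0.5, "emissions": 1.0})  # minimize both
>>> ranked = lexical_comparison(objectives, binsize)  # rank by cost, then emissions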

merge_assets(capa_a, capa_b, interpolation='linear', dimension='asset')[source]

Merge two capacity arrays.

multiindex_to_coords(data, dimension='asset')[source]

Flattens multi-index dimension into multi-coord dimension.

nametuple_to_dict(nametup)[source]

Transforms a namedtuple of type GenericDict into an OrderedDict.

reduce_assets(assets, coords=None, dim='asset', operation=None)[source]

Combine assets along given asset dimension.

This method simplifies combining assets across multiple agents, or combining assets across a given dimension. By default, it will sum together assets from the same region which have the same technology and the same installation date. In other words, assets are identified by the technology, installation year and region. The reduction happens over other possible coordinates, e.g. the owning agent.

More specifically, assets are often indexed using what xarray calls a dimension without coordinates. In practice, there are still coordinates associated with assets, e.g. technology and installed (installation year or version), but the values associated with these coordinates are not unique: there may be more than one asset with the same technology or installation year.

For instance, with assets per agent defined as \(A^{i, r}_o\), with \(i\) an agent index, \(r\) a region, \(o\) the coordinates identified in coords, and \(T\) the transformation identified by operation, this function computes:

\[R_{r, o} = T[\{A^{i, r}_o; \forall i\}]\]

If \(T\) is the sum operation, then:

\[R_{r, o} = \sum_i A^{i, r}_o\]

Example

Let’s construct assets that contain duplicates. First we construct the dimensions, using fake data:

>>> data = xr.Dataset()
>>> data['year'] = 'year', [2010, 2015, 2017]
>>> data['installed'] = 'asset', [1990, 1991, 1991, 1990]
>>> data['technology'] = 'asset', ['a', 'b', 'b', 'c']
>>> data['region'] = 'asset', ['x', 'x', 'x', 'y']
>>> data = data.set_coords(('installed', 'technology', 'region'))

We can check there are duplicate assets in this coordinate system:

>>> processes = set(
...     zip(data.installed.values, data.technology.values, data.region.values)
... )
>>> len(processes) < len(data.asset)
True

Now we can easily create a fake two-dimensional quantity per process and per year:

>>> data['capacity'] = ('year', 'asset'), np.arange(3 * 4).reshape(3, 4)

The point of reduce_assets is to aggregate assets that refer to the same process:

>>> reduce_assets(data.capacity)
<xarray.DataArray 'capacity' (year: 3, asset: 3)>
array([[ 0,  3,  3],
       [ 4,  7, 11],
       [ 8, 11, 19]])
Coordinates:
  * year        (year) ... 2010 2015 2017
    installed   (asset) ... 1990 1990 1991
    technology  (asset) <U1 'a' 'c' 'b'
    region      (asset) <U1 'x' 'y' 'x'
Dimensions without coordinates: asset

We can also specify explicitly which coordinates in the ‘asset’ dimension should be reduced, and how:

>>> reduce_assets(
...     data.capacity,
...     coords=('technology', 'installed'),
...     operation = lambda x: x.mean(dim='asset')
... )
<xarray.DataArray 'capacity' (year: 3, asset: 3)>
array([[ 0. ,  1.5,  3. ],
       [ 4. ,  5.5,  7. ],
       [ 8. ,  9.5, 11. ]])
Coordinates:
  * year        (year) ... 2010 2015 2017
    technology  (asset) <U1 'a' 'b' 'c'
    installed   (asset) ... 1990 1991 1990
Dimensions without coordinates: asset
tupled_dimension(array, axis)[source]

Transforms one axis into tuples.

12.8.6. Examples

Example models and datasets.

Helps create and run small standard models from the command line or directly from Python.

To run from the command-line:

python -m muse --model default

Other models may be available. Check the command-line help:

python -m muse --help

The same models can be instantiated in a Python script as follows:

from muse import examples
model = examples.model("default")
model.run()

model(name='default')[source]

Fully constructs a given example model.

technodata(sector, model='default')[source]

Technology for a sector of a given example model.
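
For instance, assuming the default example model defines a sector named "residential":

from muse import examples
residential_techs = examples.technodata("residential", model="default")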