Language

pyretic.core.language

Inheritance diagram of pyretic.core.language

class pyretic.core.language.Policy

Bases: object

Top-level abstract class for policies. All Pyretic policies have methods for

  • evaluation on a single packet
  • compilation to a switch Classifier
eval(pkt)

evaluate this policy on a single packet

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
__add__(pol)

The parallel composition operator.

Parameters:pol (Policy) – the Policy to the right of the operator
Return type:Parallel
__rshift__(other)

The sequential composition operator.

Parameters:other (Policy) – the Policy to the right of the operator
Return type:Sequential
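The two composition operators above can be illustrated with a small standalone sketch (not the real library): here a policy is modeled simply as a function from one packet to a set of packets, and packets are any hashable values.

```python
# Standalone sketch of Pyretic's composition semantics; a policy is
# modeled as a function from one packet to a set of packets.

def parallel_eval(policies, pkt):
    """Model of the + operator: union of each policy's output on pkt."""
    out = set()
    for p in policies:
        out |= p(pkt)
    return out

def sequential_eval(policies, pkt):
    """Model of the >> operator: each policy consumes the previous output."""
    pkts = {pkt}
    for p in policies:
        pkts = set().union(*(p(q) for q in pkts))
    return pkts

# Two trivial policies, named to mirror Pyretic's identity and drop.
identity = lambda pkt: {pkt}
drop = lambda pkt: set()
```

Note how `drop` annihilates under `>>` (sequencing through it yields the empty set) but is a unit under `+`.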
__eq__(other)

Syntactic equality.

__ne__(other)

Syntactic inequality.

__weakref__ None

list of weak references to the object (if defined)

class pyretic.core.language.Filter

Bases: pyretic.core.language.Policy

Abstract class for filter policies. A filter Policy will always either

  • pass packets through unchanged
  • drop them

No packets will ever be modified by a Filter.

eval(pkt)

evaluate this policy on a single packet

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
__or__(pol)

The Boolean OR operator.

Parameters:pol (Filter) – the filter Policy to the right of the operator
Return type:Union
__and__(pol)

The Boolean AND operator.

Parameters:pol (Filter) – the filter Policy to the right of the operator
Return type:Intersection
__sub__(pol)

The Boolean subtraction operator.

Parameters:pol (Filter) – the filter Policy to the right of the operator
Return type:Difference
__invert__()

The Boolean negation operator.

Return type:negate
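Because a filter either keeps or drops a packet, its Boolean operators behave like predicate logic lifted to packet sets. A standalone sketch, with packets modeled as tuples of (field, value) pairs and illustrative field names:

```python
# Sketch: a filter's eval is a predicate lifted to packet sets.
def filter_eval(pred, pkt):
    return {pkt} if pred(pkt) else set()

# Illustrative predicates over packets modeled as ((field, value), ...) tuples.
is_http = lambda pkt: dict(pkt).get('dstport') == 80
is_local = lambda pkt: dict(pkt).get('dstip', '').startswith('10.')

# Models of the operators: | (Union), & (Intersection), ~ (negate).
either = lambda pkt: is_http(pkt) or is_local(pkt)
both = lambda pkt: is_http(pkt) and is_local(pkt)
neither = lambda pkt: not either(pkt)
```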
class pyretic.core.language.match(*args, **kwargs)

Bases: pyretic.core.language.Filter

Match on all specified fields. Matched packets are kept, non-matched packets are dropped.

Parameters:
  • *args

    field matches in argument format

  • **kwargs

    field matches in keyword-argument format

eval(pkt)

evaluate this policy on a single packet

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
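A standalone sketch of match's semantics (not the real class), with packets again modeled as tuples of (field, value) pairs:

```python
def match_eval(pkt, **fields):
    """Sketch of match: keep pkt only if every given field agrees."""
    headers = dict(pkt)
    if all(headers.get(f) == v for f, v in fields.items()):
        return {pkt}
    return set()
```

For example, a policy like `match(switch=1, dstport=80)` keeps only packets currently at switch 1 and destined for TCP port 80, dropping everything else.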
class pyretic.core.language.modify(*args, **kwargs)

Bases: pyretic.core.language.Policy

Modify all specified fields to the specified values.

Parameters:
  • *args

    field assignments in argument format

  • **kwargs

    field assignments in keyword-argument format

eval(pkt)

evaluate this policy on a single packet

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
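In the same standalone model, modify rewrites the named header fields and emits exactly one output packet:

```python
def modify_eval(pkt, **fields):
    """Sketch of modify: one output packet with the given fields rewritten."""
    headers = dict(pkt)
    headers.update(fields)
    return {tuple(sorted(headers.items()))}
```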
class pyretic.core.language.Query

Bases: pyretic.core.language.Filter

Abstract class representing a data structure into which packets (conceptually) go and with which callbacks can register.

eval(pkt)

evaluate this policy on a single packet

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
class pyretic.core.language.FwdBucket

Bases: pyretic.core.language.Query

Class for registering callbacks on individual packets sent to the controller.

compile()

Produce a Classifier for this policy

Return type:Classifier
class pyretic.core.language.CountBucket

Bases: pyretic.core.language.Query

Class for registering callbacks on counts of packets sent to the controller.

eval(pkt)

evaluate this policy on a single packet

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
start_update()

Use a condition variable to mediate access to bucket state as it is being updated.

Why condition variables and not locks? The main reason is that the state update does not happen in a single function call: the runtime processes the classifier rule by rule, and buckets may be touched in arbitrary order depending on the policy. In that case,

(1) Holding locks across function calls is dangerous and non-modular, since every caller must be aware of the locking discipline across a large function, and acquiring locks in different orders at different points in the code can result in tricky deadlocks (another lock is already involved in protecting bucket updates in the runtime).

(2) Python’s “with” statement is clean, whereas splitting it into explicit lock.acquire() and lock.release() calls results in replicated, boilerplate failure-handling code.
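The pattern described above can be sketched with Python's threading.Condition. The class and method names below are illustrative, not the actual CountBucket API:

```python
import threading

class BucketSketch:
    """Guard a multi-step state update with a condition variable."""
    def __init__(self):
        self.cv = threading.Condition()
        self.updating = False
        self.matches = []

    def start_update(self):
        with self.cv:                 # "with" releases the lock even on error
            self.updating = True

    def add_match(self, m):
        with self.cv:                 # updates may arrive rule by rule
            self.matches.append(m)

    def finish_update(self):
        with self.cv:
            self.updating = False
            self.cv.notify_all()      # wake any readers waiting on the update

    def read_matches(self):
        with self.cv:
            while self.updating:      # readers wait out in-flight updates
                self.cv.wait()
            return list(self.matches)
```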

add_match(m)

Add a match m to list of classifier rules to be queried for counts.

add_pull_stats(fun)

Point to function that issues stats queries in the runtime.

pull_stats()

Issue stats queries from the runtime

handle_flow_stats_reply(switch, flow_stats)

Given a flow_stats_reply from switch s, collect only those counts which are relevant to this bucket.

Very simple processing for now: just collect all packet and byte counts from rules that have a match that is in the set of matches this bucket is interested in.

class pyretic.core.language.CombinatorPolicy(policies=[])

Bases: pyretic.core.language.Policy

Abstract class for policy combinators.

Parameters:policies (list Policy) – the policies to be combined.
class pyretic.core.language.negate(policies=[])

Bases: pyretic.core.language.CombinatorPolicy, pyretic.core.language.Filter

Combinator that negates the input policy.

Parameters:policies (list Filter) – the policies to be negated.
eval(pkt)

evaluate this policy on a single packet

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
class pyretic.core.language.parallel(policies=[])

Bases: pyretic.core.language.CombinatorPolicy

Combinator for several policies in parallel.

Parameters:policies (list Policy) – the policies to be combined.
eval(pkt)

Evaluates to the set union of evaluating each policy in self.policies on pkt.

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
class pyretic.core.language.union(policies=[])

Bases: pyretic.core.language.parallel, pyretic.core.language.Filter

Combinator for several filter policies in parallel.

Parameters:policies (list Filter) – the policies to be combined.
class pyretic.core.language.sequential(policies=[])

Bases: pyretic.core.language.CombinatorPolicy

Combinator for several policies in sequence.

Parameters:policies (list Policy) – the policies to be combined.
eval(pkt)

Evaluates each policy in self.policies on every packet output by the previous one, taking the set union of the results; the first policy in self.policies is evaluated on pkt itself.

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
class pyretic.core.language.intersection(policies=[])

Bases: pyretic.core.language.sequential, pyretic.core.language.Filter

Combinator for several filter policies in sequence.

Parameters:policies (list Filter) – the policies to be combined.
class pyretic.core.language.DerivedPolicy(policy=identity)

Bases: pyretic.core.language.Policy

Abstract class for a policy derived from another policy.

Parameters:policy (Policy) – the internal policy (assigned to self.policy)
eval(pkt)

evaluates to the output of self.policy.

Parameters:pkt (Packet) – the packet on which to evaluate
Return type:set Packet
compile()

Produce a Classifier for this policy

Return type:Classifier
class pyretic.core.language.difference(f1, f2)

Bases: pyretic.core.language.DerivedPolicy, pyretic.core.language.Filter

The difference between two filter policies.

Parameters:
  • f1 (Filter) – the minuend
  • f2 (Filter) – the subtrahend
class pyretic.core.language.if_(pred, t_branch, f_branch=identity)

Bases: pyretic.core.language.DerivedPolicy

if pred holds, t_branch, otherwise f_branch.

Parameters:
  • pred (Policy) – the predicate
  • t_branch – the true branch policy
  • f_branch – the false branch policy
class pyretic.core.language.fwd(outport)

Bases: pyretic.core.language.DerivedPolicy

fwd out a specified port.

Parameters:outport (int) – the port on which to forward.
class pyretic.core.language.xfwd(outport)

Bases: pyretic.core.language.DerivedPolicy

fwd out a specified port, unless the packet came in on that same port. (Semantically equivalent to OpenFlow’s forward action.)

Parameters:outport (int) – the port on which to forward.
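In the same standalone packet model, the two forwarding policies can be sketched as follows; fwd can be thought of as rewriting the outport field, and xfwd adds the in-port check:

```python
def fwd_eval(pkt, outport):
    """Sketch of fwd: rewrite the packet's outport field."""
    headers = dict(pkt)
    headers['outport'] = outport
    return {tuple(sorted(headers.items()))}

def xfwd_eval(pkt, outport):
    """Sketch of xfwd: like fwd, unless the packet arrived on outport."""
    if dict(pkt).get('inport') == outport:
        return set()                  # never send a packet back out its in-port
    return fwd_eval(pkt, outport)
```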
class pyretic.core.language.DynamicPolicy(policy=drop)

Bases: pyretic.core.language.DerivedPolicy

Abstract class for dynamic policies. The behavior of a dynamic policy changes each time self.policy is reassigned.

class pyretic.core.language.DynamicFilter(policy=drop)

Bases: pyretic.core.language.DynamicPolicy, pyretic.core.language.Filter

Abstract class for dynamic filter policies. The behavior of a dynamic filter policy changes each time self.policy is reassigned.

class pyretic.core.language.flood

Bases: pyretic.core.language.DynamicPolicy

Policy that floods packets on a minimum spanning tree, recalculated every time the network is updated (set_network).

class pyretic.core.language.ingress_network

Bases: pyretic.core.language.DynamicFilter

Filter that passes packets located at a (switch,inport) pair entering the network and drops all others.

class pyretic.core.language.egress_network

Bases: pyretic.core.language.DynamicFilter

Filter that passes packets located at a (switch,outport) pair leaving the network and drops all others.

class pyretic.core.language.Rule(m, acts)

Bases: object

A rule contains a filter and the parallel composition of zero or more Pyretic actions.

__eq__(other)

Based on syntactic equality of policies.

__ne__(other)

Based on syntactic equality of policies.

eval(in_pkt)

If this rule matches the packet, then return the union of the sets of packets produced by the actions. Otherwise, return None.

__weakref__ None

list of weak references to the object (if defined)

class pyretic.core.language.Classifier(new_rules=[])

Bases: object

A classifier contains a list of rules, where the order of the list implies the relative priorities of the rules. Semantically, classifiers are functions from packets to sets of packets, similar to OpenFlow flow tables.

__eq__(other)

Based on syntactic equality of policies.

__ne__(other)

Based on syntactic equality of policies.

__weakref__ None

list of weak references to the object (if defined)

eval(in_pkt)

Evaluate against each rule in the classifier, starting with the highest priority. Return the set of packets resulting from applying the actions of the first rule that matches.
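The first-match evaluation just described can be sketched as follows, with rules modeled as (predicate, action) pairs ordered from highest to lowest priority; returning the empty set when no rule matches is an assumption of this sketch:

```python
def classifier_eval(rules, pkt):
    """Sketch of Classifier.eval: the first matching rule wins."""
    for matches, action in rules:     # highest priority first
        if matches(pkt):
            return action(pkt)
    return set()                      # assumption: no match means drop
```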

pyretic.lib.query

class pyretic.lib.query.LimitFilter(limit=None, group_by=[])

Bases: pyretic.core.language.DynamicFilter

A DynamicFilter that matches the first limit packets in a specified grouping.

Parameters:
  • limit (int) – the number of packets to be matched in each grouping.
  • group_by (list string) – the fields by which to group packets.
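The limiting behavior can be sketched as a stateful filter (standalone model, not the real class):

```python
def make_limit_filter(limit, group_by):
    """Sketch of LimitFilter: pass the first `limit` packets per group."""
    seen = {}
    def filt(pkt):
        # Group key: the values of the group_by fields in this packet.
        key = tuple(dict(pkt).get(g) for g in group_by)
        seen[key] = seen.get(key, 0) + 1
        return {pkt} if seen[key] <= limit else set()
    return filt
```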
class pyretic.lib.query.packets(limit=None, group_by=[])

Bases: pyretic.core.language.DerivedPolicy

A FwdBucket preceded by a LimitFilter.

Parameters:
  • limit (int) – the number of packets to be matched in each grouping.
  • group_by (list string) – the fields by which to group packets.
class pyretic.lib.query.AggregateFwdBucket(interval, group_by=[])

Bases: pyretic.core.language.FwdBucket

An abstract FwdBucket which calls back all registered routines every interval seconds (can take positive fractional values) with an aggregate value/dict. If group_by is empty, registered routines are called back with a single aggregate value. Otherwise, group_by defines the set of headers used to group counts which are then returned as a dictionary.

class pyretic.lib.query.count_packets(interval, group_by=[])

Bases: pyretic.lib.query.AggregateFwdBucket

AggregateFwdBucket that calls back with aggregate count of packets.

class pyretic.lib.query.count_bytes(interval, group_by=[])

Bases: pyretic.lib.query.AggregateFwdBucket

AggregateFwdBucket that calls back with aggregate bytesize of packets.
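The grouping behavior shared by count_packets and count_bytes can be sketched as follows (the interval-timer and callback machinery is omitted; the `size` parameter is an assumption used to cover both counting modes):

```python
def aggregate_counts(pkts, group_by, size=lambda pkt: 1):
    """Sketch: one total when group_by is empty, otherwise a dict keyed
    by the chosen header fields. Pass a byte-length size for count_bytes."""
    if not group_by:
        return sum(size(p) for p in pkts)
    counts = {}
    for pkt in pkts:
        key = tuple(dict(pkt).get(g) for g in group_by)
        counts[key] = counts.get(key, 0) + size(pkt)
    return counts
```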

Runtime

pyretic.core.runtime

class pyretic.core.runtime.Runtime(backend, main, kwargs, mode='interpreted', verbosity='normal')

Bases: object

The Runtime system. Includes packet handling, compilation to OF switches, topology maintenance, dynamic update of policies, querying support, etc.

Parameters:
  • backend (Backend) – handles the connection to switches
  • main (pyretic program (.py)) – the program to run
  • kwargs (dict from strings to values) – arguments to main
  • mode (string) – one of interpreted/i, reactive0/r0, proactive0/p0
  • verbosity (string) – one of low, normal, high, please-make-it-stop
handle_packet_in(concrete_pkt)

The packet interpreter.

Parameters:concrete_pkt – the packet to be interpreted.
handle_policy_change()

Updates runtime behavior (both interpreter and switch classifiers) when some sub-policy in self.policy changes.

handle_network_change()

Updates runtime behavior (both interpreter and switch classifiers) when the concrete network topology changes.

update_switches(classifier)

Updates switch tables based on input classifier

Parameters:classifier (Classifier) – the input classifier
update_dynamic_sub_pols()

Updates the set of active dynamic sub-policies in self.policy

reactive0_install(in_pkt, out_pkts)

Reactively installs switch table entries based on a given policy evaluation.

Parameters:
  • in_pkt (Packet) – the input on which the policy was evaluated
  • out_pkts (set Packet) – the output of the evaluation
match_on_all_fields(pkt)

Produces a concrete predicate exactly matching a given packet.

Parameters:pkt (Packet) – the packet to match
Returns:an exact-match predicate
Return type:dict of strings to values
match_on_all_fields_rule_tuple(pkt_in, pkts_out)

Produces a rule tuple exactly matching a given packet and outputting a given set of packets.

Parameters:
  • pkt_in (Packet) – the input packet
  • pkts_out (set Packet) – the output packets
Returns:an exact-match (microflow) rule
Return type:(dict of strings to values, int, list int)
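A sketch of the microflow idea just described; the field names and the exact rule-tuple layout are assumptions for illustration:

```python
def match_on_all_fields(pkt):
    """Sketch: the exact-match predicate is simply every header of pkt."""
    return dict(pkt)

def microflow_rule(pkt_in, pkts_out, priority=0):
    """Sketch: (exact match, priority, output ports) for a single flow."""
    outports = [dict(p).get('outport') for p in pkts_out]
    return (match_on_all_fields(pkt_in), priority, outports)
```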

install_classifier(classifier)

Proactively installs switch table entries based on the input classifier

Parameters:classifier (Classifier) – the input classifier
pull_stats_for_bucket(bucket)

Returns a function that can be used by counting buckets to issue queries from the runtime.

pyretic.core.network

class pyretic.core.network.Network(topology=None)

Bases: object

Abstract class for networks
