The processes control table (bs/processes_ctrl)¶
The processes control table is the heart of the project tasks' inputs and outputs. It tells each application how to work on the current task: which file types should be loaded from the dependencies of the current task, how the dependencies should be loaded inside the application, what the loading commands are, and so on. All this and more happens according to expressions written in the table columns for each pipeline process and each supported application, and all these expressions are translated and resolved internally by the NReal nreal_brain module. The columns of the bs/processes_ctrl table are the following:
order¶
If the process order of the current process output and of the output process's input are equal, then the statuses of both the output process and its input will be affected. If the orders are not equal, the output process status will not be affected as long as it has an input that will be affected.
(i.e., in design->rawModel->model, approving design will update model only if the rawModel task was not created; if it exists, the model status will not be updated, since its 'rawModel' input will be updated by the same 'design' process status change. This means that updating a task status can update only one upstream task, unless all the upstream tasks have the same order, in which case all of them get updated.)
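As a rough illustration, the order rule above might be modeled like this (a minimal, hypothetical sketch; the function name and data shapes are made up and the real logic lives in nreal_brain):

```python
def tasks_to_update(existing_tasks, chain, orders):
    """Return the tasks affected by approving the task feeding this chain.

    chain:          process names in pipeline order, e.g. ['rawModel', 'model']
    existing_tasks: the set of processes that actually have tasks created
    orders:         process -> order value from the processes control table

    Only the first existing task in the chain is affected, plus any other
    existing task that shares the same order value.
    """
    present = [p for p in chain if p in existing_tasks]
    if not present:
        return []
    first_order = orders[present[0]]
    return [p for p in present if orders[p] == first_order]


orders = {"rawModel": 2, "model": 3}
# rawModel exists, so approving design updates only rawModel
print(tasks_to_update({"rawModel", "model"}, ["rawModel", "model"], orders))
# rawModel was never created, so model is updated instead
print(tasks_to_update({"model"}, ["rawModel", "model"], orders))
```

With equal orders (e.g. both set to 2), both tasks would be returned, matching the "all of them will get updated" case.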
automate_status¶
If False, this process will not update any upstream task statuses.
approved_affected¶
When approved_affected is 'False', this task's status will be updated only if it was 'Assignment'; otherwise it remains as it is. (i.e., the animation process doesn't update to 'Revisit' when the character rig gets updated and approved; this prevents approved animation that worked fine with the old rig from being updated.)
If approved_affected is 'True', this task's status will be updated according to the default rule explained above in (1: order).
all_inputs_needed¶
When this flag is 'True', this process will not be set to 'Pending' until all its inputs are 'Approved'.
Also, when this flag is 'True' for an output process whose status is 'Pending' (meaning it became ready after all inputs were approved), modifying one of its inputs will change it from 'Pending' to 'Waiting', so the user knows that this task is no longer ready but 'Waiting' for its dependencies to be satisfied again.
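The Pending/Waiting transition above can be sketched as follows (an illustrative model only; the status names come from the text, the function is hypothetical):

```python
def next_status(current_status, input_statuses, all_inputs_needed):
    """Compute a task's readiness status from its input statuses (sketch)."""
    all_approved = all(s == "Approved" for s in input_statuses)
    if all_inputs_needed:
        if all_approved:
            return "Pending"            # ready: every input is Approved
        if current_status == "Pending":
            return "Waiting"            # an input regressed after readiness
        return current_status           # not ready yet; leave as-is
    # without the flag, any approved input can make the task Pending
    if any(s == "Approved" for s in input_statuses):
        return "Pending"
    return current_status


print(next_status("Assignment", ["Approved", "Approved"], True))  # Pending
print(next_status("Pending", ["Approved", "Revisit"], True))      # Waiting
```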
deps_req_ratio¶
The completion percentage of this process's dependencies at which the process can start. For example: if the rig deps_req_ratio is 0.7, the rig process dependency requirement ratio is 70% (in other words, the rigging process can start when 70% of its dependencies are satisfied).
Notes:
- A deps_req_ratio of 1.0 is equivalent to all_inputs_needed.
- If deps_req_ratio is less than 1.0 but the process requires all inputs (all_inputs_needed is True), the overlap with its dependencies is limited: the process cannot start earlier than one day after its last dependency task, and it must end at least one day after the end date of that last dependency task.
- When not all inputs are needed, the process can start before its last dependency task if earlier dependencies are available; however, it will still be planned to end at least one day after the last dependency task's end date.
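The start condition implied by deps_req_ratio can be written as a one-liner (a minimal sketch; the real planner also accounts for the scheduling offsets described in the notes above):

```python
def can_start(dep_statuses, deps_req_ratio):
    """A process may start once the approved fraction of its dependencies
    reaches deps_req_ratio. A ratio of 1.0 behaves like all_inputs_needed."""
    if not dep_statuses:
        return True
    approved = sum(1 for s in dep_statuses if s == "Approved")
    return approved / len(dep_statuses) >= deps_req_ratio


# rig with deps_req_ratio = 0.7: 7 of 10 deps approved -> can start
print(can_start(["Approved"] * 7 + ["Pending"] * 3, 0.7))  # True
print(can_start(["Approved", "Approved", "Pending"], 0.7))  # 2/3 < 0.7: False
```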
review_days¶
The default number of days needed to review and approve the task before the downstream task can start. This affects the automatic project (or asset) task planning done by the Seneferu Project Planner or by the Project Interface menu that invokes it.
Each created task also has a 'review_days' column. If the task's review_days value is less than 0 (like -1, which means "Not Set"), the default value here in the processes control table will be used; otherwise, the 'review_days' value on the task itself (if 0.0 or greater) will override the value here.
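The fallback rule reads naturally as a small helper (illustrative only; the default value here is made up):

```python
DEFAULT_REVIEW_DAYS = 2.0  # hypothetical value from the processes control table

def effective_review_days(task_review_days, default_review_days=DEFAULT_REVIEW_DAYS):
    """A negative per-task value (e.g. -1, meaning "Not Set") falls back to the
    processes control default; a value of 0.0 or greater overrides it."""
    if task_review_days is None or task_review_days < 0:
        return default_review_days
    return task_review_days


print(effective_review_days(-1))   # Not Set -> table default
print(effective_review_days(0.0))  # explicit zero overrides the default
```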
indirects_needed¶
If True, indirect input assets will be returned and checked when querying the process dependency tasks using nreal_brain.get_dep_tasks(). This means that if a scene process has this flag ON, asset dependencies that are not members of this scene, but are members of the shots related to this scene, will still be returned. In other words, the brain dependency module will traverse through the children of the children to see if they are needed, rather than stopping at the first layer of direct input dependencies.
Example: when animLayoutScn (which is a scene level task) has this flag ON, all asset processes (like rig) planned indirectly into the scene through the children shots will be returned as dependencies of animLayoutScn.
When assemblyScn (which is also a scene level task) has this flag OFF, models planned into the scene indirectly via the individual shots' assets will be ignored, and only models of assets planned directly into this scene will be returned.
always_affected¶
If 'True', any of this process's input processes changing to 'Approved' affects its status.
This is useful for processes like 'assemblySh', where any approved input task should activate it and make it 'Pending', while any incoming task change should set it from 'Approved' to 'Revisit' so the user can check whether anything is affected.
assembly_member¶
If True, this process will always be in the dependency list of any task, even if a task with a higher order exists.
i.e., in assemblySh, the 'animation', 'clothSim', 'hairSim', ..., and 'assemblyScn' processes are assembly_members, so all of them will be visible when querying the dependencies of 'assemblySh'. Normally, assemblyScn would not appear there since 'animation' is there, and 'animation' in turn would not appear since 'clothSim' and 'hairSim' are there; setting them as assembly_members forces all of them into the dependency list of 'assemblySh'.
checkout_as_dep¶
An expression that defines the file types to be checked out when these files are checked out as a dependency for a caller task.
For example, checking out the dependency tasks of an animation task needs the Maya file of the rig task, so the checkout_as_dep file type for 'rig' should be set to 'maya'.
Accepted expression syntax: typeA|typeB&typeC (typeA|typeB&typeC|typeD, etc.). The '&' separates required groups, and at least one type from each '&' group must be satisfied; within a group, the '|' alternatives are checked left to right and the first type found is returned (so for typeA|typeB, typeA is returned if found, otherwise typeB is checked).
Example: 'maya|katana&alembic' means that either a 'maya' or a 'katana' file must be there, and in addition an 'alembic' file must be there.
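A minimal resolver for this expression syntax could look like the following (a sketch only; the real parser lives in nreal_brain and may handle more operators):

```python
def resolve_checkout_types(expression, available_types):
    """Resolve a checkout_as_dep expression such as 'maya|katana&alembic'.

    '&' separates required groups; within a group, '|' lists alternatives
    tried left to right. Returns one type per group, or None if any
    required group cannot be satisfied.
    """
    chosen = []
    for group in expression.split("&"):
        for alt in group.split("|"):
            if alt in available_types:
                chosen.append(alt)
                break
        else:
            return None  # a required '&' group has no available type
    return chosen


# 'maya' missing but 'katana' present, and 'alembic' present -> satisfied
print(resolve_checkout_types("maya|katana&alembic", {"katana", "alembic"}))
# 'alembic' group unsatisfied -> None
print(resolve_checkout_types("maya|katana&alembic", {"maya"}))
```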
dep_checkout_method¶
An expression that defines how the file will be checked out in an application when it's checked out as a dependency task for a caller task.
Example: checking out dependencies for an 'animation' task requires referencing the 'rig', 'environments', etc., so the dep_checkout_method for 'rig' should be set to 'reference'.
supported methods are:
import, reference, retarget_<process>, create_rib, create_ass, create_vdb, retarget_particle
In the case of retarget_<process>, the target itself would have something like this:
reference<-retarget or import<-retarget. This means: reference the 'checkout_type' file, then replace the internal reference with the sourceTarget coming from retarget_<process>.
retarget_plate,layer (retargets both or any available (but matching) plate or layer process asset)
Example:
clothSetup   reference<-retarget   for_process: clothSim
animation    retarget_rig          for_process: clothSim, hairSim
clothSim     reference             for_process: all
Allowed expressions:
methodA|methodB&methodC
Note: The dep_checkout_method expression must match the checkout_as_dep expression conditions exactly.
Example: checkout_as_dep = 'maya|katana&alembic' requires dep_checkout_method to be set to something like:
'import|reference&retarget_alembic', which means import the 'maya' file or reference the 'katana' file, and retarget the alembic (that was imported in Maya or referenced in Katana).
for_process¶
An expression that defines the processes to which the checkout_as_dep type and the dep_checkout_method apply. By default, this should be 'all', unless a process needs custom treatment, OR NEEDS TO BE IGNORED in the returned task dependencies (like ignoring the 'texture' task, which should not be returned as a dependency task for anything).
Example: when checking out 'rig' as a dependency for any upstream process, the 'maya' type should be 'referenced'; but in the case of clothSetup and hairSetup, the 'rig' process should be checked out as a dependency by 'import' of type 'alembic'. To configure this:
The rig process should have the following expressions:
checkout_as_dep:maya|alembic
dep_checkout_method:reference|import
for_process:all|clothSetup,hairSetup
Notice that this column must use the same expression syntax and operations as checkout_as_dep and dep_checkout_method, and it cannot be null when checkout_as_dep has an expression.
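The position-aligned relationship between the three columns can be sketched like this (a hypothetical helper; the real resolution happens inside nreal_brain):

```python
def resolve_dep_rule(caller_process, checkout_as_dep, dep_checkout_method, for_process):
    """Pick the (file_type, method) pair to use for a given caller process.

    The three expressions are position-aligned on '|': entry i of for_process
    (a comma list, or 'all') selects entry i of the other two columns.
    A specific process entry wins over the 'all' fallback.
    """
    types = checkout_as_dep.split("|")
    methods = dep_checkout_method.split("|")
    targets = for_process.split("|")
    fallback = None
    for i, target in enumerate(targets):
        if caller_process in target.split(","):
            return types[i], methods[i]
        if target == "all":
            fallback = (types[i], methods[i])
    return fallback


# clothSetup gets the specific rule; animation falls back to 'all'
print(resolve_dep_rule("clothSetup", "maya|alembic", "reference|import",
                       "all|clothSetup,hairSetup"))  # ('alembic', 'import')
print(resolve_dep_rule("animation", "maya|alembic", "reference|import",
                       "all|clothSetup,hairSetup"))  # ('maya', 'reference')
```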
publish_types¶
The file types this process should generate and publish from the application publishing the task of this process.
This can be something like "maya&alembic", "maya&nrHair" or "katana&layersInfo", etc.
In addition, there are publishing commands/keywords like:
- autoCache: if used, the publishing application will automatically detect the cache types to generate based on the cache-tagged primitives. autoCache will detect alembic, usd, vdb, yeti and nrHair cache types and will generate any or all of them depending on the tagged primitives.
- reset_<process,process,...>: resets the asset of the given process(es) to its original state and process task path. For example, reset_rig will look for any rig asset under the asset being published and reset it to the correct version and path during the publish. This is useful when building setups that rely on retargeting the rig by animClip; the reset_rig method during the clothSetup publish, for instance, will reset the rig back to being a rig.
Note that such methods can be added to the system by hooking them up in a pre_publish_script in the processes control, or by putting them in the application pre/py publish scripts.
treat_as_assembly¶
If True, this object will behave as if it has a loop connection that feeds back as an input to itself. This behavior is needed for assembly tasks, where you can have an unlimited number of assembly assets under an assembly asset. In the case of asset_in_shot or asset_scene you cannot, for instance, add a shot as a child of a shot, but for assembly assets something like this is needed and must be supported.
Examples of such processes are "layout" and "assembly". Taking 'layout' as an example, you will find that the 'layout' processes of all the children assembly assets behave as dependencies of the current 'layout' process; the same idea applies to the 'assembly' process, where all child 'assembly' processes are treated as members of it.
affected_by_children¶
By default, children asset tasks are affected by their parent asset tasks, which means the parent asset tasks have to be done before the children asset tasks can be done. Accordingly, when planning and scheduling the project/asset tasks, the parent asset's tasks are scheduled to start before the children's tasks. For example, the design of a parent assembly Set asset will affect the design/model of a child asset that is a member of this set, and that Set design task will be scheduled to be done before the children assets' design tasks can start (making the big picture first, with the details coming next; the designer of a child also needs to see where these designs are implemented in the bigger parent asset, so Seneferu Production Manager will automatically give them access to the parent design as a dependency of their child asset design, or any similar case).
However, in some cases the opposite behavior is what you really want. For example, the assembly task of the parent Set asset cannot be finished before the models of the Set props are done, so the assembly task in this (or a similar) case has to be scheduled to start after all the children model tasks are done. Conversely to the previous case, the artist doing the assembly task now needs to see the children asset models coming in as dependencies of the parent assembly task. This is where the 'affected_by_children' flag is used to allow the inverted behavior: configuring the 'assembly' process as 'affected_by_children' achieves this.
Note: in cases where you need neither of the two behaviors, you can simply skip creating the parent tasks. The children tasks will then all be members of the parent, but they will have the same weight relative to each other and to the parent asset. For example, if the parent asset doesn't have a design task, all children design tasks can run independently and won't require anything from the parent.
art_aprv_as_approved¶
'Art. Approved' acts as 'Approved'. When this is set to True, setting the process status to 'Art. Approved' will automatically set it to 'Approved'.
for_app¶
Defines custom rules for checkout types/methods for specific applications whenever an entry is available. Applications are separated by '.'; for each application, there should be an equivalent rule expression in each of for_process, publish_types, dep_checkout_method and checkout_as_dep.
for instance:
for_app       checkout_as_dep   dep_checkout_method   for_process
Maya.Katana   maya.xml          reference.reference   all.all
This will reference the 'maya' dependency file for 'all' parent processes if the current application is 'Maya', or
reference the 'xml' dependency file for 'all' parent processes if the current requesting application is 'Katana'.
Note: the application name here should come as it was written in the ‘Production Manager’ Applications list.
ignored_as_dep_for¶
If 'all', this process will never be returned as a dependency for any given process, although its status will keep affecting its upstream tasks/processes. Typical values are 'all' or '<some>,<processes>,<names>'.
Example: for the texture task, ignored_as_dep_for can be set to 'all' or to 'rig,lighting,lookdev', so it will never be returned as a dependency for any of the specified comma-separated processes, but 'texture' will keep marking 'lighting' dirty whenever a new texture version is approved.
always_dep_for¶
Accepts a 'process' or a comma-separated process list.
This process will always be returned in the list of dependencies for the given processes list.
This is useful to let some process, design for instance, appear as a dependency for both rawModel and model when the two tasks are available.
Note: This column is meant to affect the nreal_brain.get_dep_tasks() method only; it does not affect the task status nr_triggers, since changing the design in the example above should mark only the rawModel task as 'revisit', while the 'model' task should not be affected before 'rawModel' becomes approved.
Note: always_dep_for works even if the output of this process is not actually connected to the inputs of the given processes. In such cases, the current process is returned as an input for the given processes while being treated as optional_for them (see the optional_for entry for more details). Example: when 'camAnim' is always_dep_for 'clothSim,hairSim', etc., camAnim will be returned as a dependency for the given sim processes, but the status of 'camAnim' will not affect the sim tasks, since the camAnim output is not really connected to them. (The opposite is true: camAnim appears as an input for them, while the actual configuration is that these sim processes are inputs for the camAnim process, where you need to see what you are animating, because you want to know what is visible in order to know what you should simulate.)
bypass¶
If the current process's results are fed as an input to downstream processes, but its results are not going to be included/embedded in those downstream processes, you may want to list those downstream processes as ones to be bypassed when this process is looking for deeper downstream tasks. The idea is that a process will not appear as a dependency of a downstream process if that downstream process already has a child dependency with the same process as an input; so if that child has the process as an input but doesn't embed its results, the child has to be bypassed to reach the downstream process.
for example:
->hairCompoundSim
/ \
clothSim ------------------>assemblySh
By default, when querying the dependencies of assemblySh in the workflow example above, clothSim will not appear in the returned dependencies because it is occluded by the hairCompoundSim process, which is supposed to embed the clothSim results in its own output. But when we know that the results out of hairCompoundSim will not actually include/embed the clothSim results, we want to make sure that clothSim is configured to bypass the hairCompoundSim process when finding its way through to the other processes (like assemblySh in this case).
So, by setting the clothSim bypass column to "hairCompoundSim", when querying the assemblySh dependencies, clothSim will bypass hairCompoundSim and appear in the assemblySh dependencies alongside hairCompoundSim, which appears by default because nothing is occluding it.
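The occlusion-plus-bypass rule can be sketched on a tiny graph (illustrative only; the real dependency query in nreal_brain handles far more cases):

```python
def visible_deps(target, inputs, bypass):
    """Return the direct inputs of `target` that survive occlusion.

    inputs: process -> list of its direct input processes
    bypass: process -> set of processes it bypasses (the bypass column)

    A direct input X is occluded by a sibling input Y when Y also has X as
    an input (Y is assumed to embed X's results), unless X bypasses Y.
    """
    direct = inputs.get(target, [])
    visible = []
    for x in direct:
        occluders = [y for y in direct if y != x and x in inputs.get(y, [])]
        if all(y in bypass.get(x, set()) for y in occluders):
            visible.append(x)
    return visible


inputs = {"assemblySh": ["hairCompoundSim", "clothSim"],
          "hairCompoundSim": ["clothSim"]}
# without a bypass, clothSim is occluded by hairCompoundSim
print(visible_deps("assemblySh", inputs, {}))
# with clothSim bypassing hairCompoundSim, both appear
print(visible_deps("assemblySh", inputs, {"clothSim": {"hairCompoundSim"}}))
```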
dont_affect¶
A comma-separated list of processes; the statuses of all processes listed here will not be affected by the status of the current process, even when automate_status is active.
optional_for¶
A comma-separated list of processes (or 'all') that this process is considered optional for. When the process is optional for other processes, it will appear in their dependencies through Seneferu Production Manager and be available for loading, but it will not push these processes forward when scheduling tasks, will not block working on them, and will not set them to "Revisit" when they are already "Approved".
So, when planning an asset or the whole production, this process will not push forward any process it is optional for.
For example: camAnim is configured as optional_for crowdSim, so the crowdSim TD will get the camera animation as a dependency for the simulation task; however, when planning and scheduling crowdSim, the due date of camAnim will not affect when the crowdSim task can start.
requires (requires at least)¶
Defines the minimum requirements before a task of this process can be ready to start. If nothing is provided, all dependencies are assumed to be required before this process can be ready to start. If required processes are defined (comma separated) and these minimum requirements are satisfied, Seneferu will assume that this process is ready to start and will act accordingly, even if some dependencies (from other processes) are not yet satisfied.
For example: if the 'rig' process is configured to 'require at least' the modelChr process, a rigging task will be set ready by Seneferu when all the asset's modelChr tasks are approved, even if the 'garment' is not yet finished and approved.
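The readiness check for this column can be sketched like this (a minimal model; in reality the decision also interacts with deps_req_ratio and all_inputs_needed above):

```python
def ready_to_start(dep_tasks, requires_at_least=None):
    """dep_tasks: list of (process, status) pairs.

    With no 'requires at least' list, every dependency must be Approved.
    Otherwise, only tasks of the listed processes must all be Approved.
    """
    if requires_at_least:
        dep_tasks = [(p, s) for p, s in dep_tasks if p in requires_at_least]
    return all(s == "Approved" for p, s in dep_tasks)


deps = [("modelChr", "Approved"), ("garment", "In Progress")]
print(ready_to_start(deps, {"modelChr"}))  # True: garment not required yet
print(ready_to_start(deps))                # False: all deps required
```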
tech_process¶
A tech process doesn't go through 'Art Review' and 'Art. Approved'; it bypasses the art department and goes directly to the tech department as 'Review' and then 'Approved'.
pre_py_trigger¶
An additional Python trigger that runs inside the bs_status triggers on approval events; it has all the main trigger args available, in addition to some parameters initiated internally inside bs_status.
pre_py_trigger runs immediately when a task gets approved, and before any brain trigger events on downstream tasks. This is generally useful when approving a task needs custom processing before the main trigger starts altering the downstream tasks, for example when approving a 'lighting' task has to create downstream 'render' tasks for each render layer configured in the lighting recipe being approved.
You can even use the pre_py_trigger to bypass the 'approval' trigger for a specific process by simply setting it to 'return'.
Because the pre_py_trigger is hooked up inside the nreal_brain processes triggering module, this gives you access to all the brain and trigger module variables, as follows:
- nr: an initialized instance of the nreal_brain, from which you can access the NReal API.
- server: in case you are using the NReal Project Management system with Tactic, 'server' will be a Tactic server stub instance from which you can access the Tactic API.
- editXml: the edit_xml_utils module imported as editXml, used for updating shots within the server triggers.
- search_key: the search_key of the current task whose status was updated.
- current_task: a dictionary of information about the task whose status was updated.
- prev_data: a dictionary holding the old values of all the task columns that were updated when this trigger fired.
- update_status: the new status the task has been set to.
- prev_status: the previous status before the task status was updated.
- project: the project code.
- finished_statuses = ['Art Review', 'Art. Approved', 'Review', 'Approved']
- asset: the sobject dictionary of the asset owning the task being processed.
- ast_sk: the asset search_key.
- user: the id of the login user who caused this trigger to fire.
- override_main_trigger: defaults to False; if enabled, it disables the main statuses automation trigger and relies on yours.
- respect_subcontext: Boolean, enabled by default. This tells the brain to respect the task subcontexts when processing the up/downstream task statuses. For example, the 'model' task will affect the model/body status, but the model/accessories status will not affect the 'model' status; this helps you set up a relation where the accessories should be updated to match the body, while the body should not be updated when the accessories are updated. Turning this flag off frees the status processing, so the flow goes in both directions, with each side affecting the other.
- current_ctrl_obj: the processes_ctrl sobject of the process whose task status update triggered this trigger.

Also, the following methods are available within the pre_py_trigger:

- get_planned_sobjects(process, sk=None, currStype=None): use it to query the downstream dependency tasks to do additional processing on them before the main trigger processes them, or even to completely override the main trigger.

The following modules are also available:

datetime (from datetime), sys, os, nr_status, nr_triggers, json, nr_utils
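A pre_py_trigger body could then look something like the sketch below. In production, the variables used here (update_status, current_task, override_main_trigger, etc.) are injected by nreal_brain as described above; they are stubbed at the top only so the sketch is self-contained, and the lighting-recipe layer names are made up:

```python
# Stubs for the injected trigger variables (provided by nreal_brain in production).
update_status = "Approved"
current_task = {"process": "lighting", "search_key": "sthpw/task?code=TASK001"}
override_main_trigger = False

created = []
if update_status == "Approved" and current_task["process"] == "lighting":
    # e.g. create a downstream 'render' task per render layer found in the
    # approved lighting recipe (layer names here are illustrative)
    for layer in ("beauty", "shadow"):
        created.append("render/%s" % layer)
    # take over from the main statuses automation trigger
    override_main_trigger = True
```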
post_py_trigger¶
Runs once the main task status trigger finishes processing. Within this post_py_trigger hookup, you have access to the same variables, methods and modules as in the pre_py_trigger above, in addition to any other variables, modules or methods you defined in the pre_py_trigger.
use_nr_alembic¶
If True, all alembic dependencies for this process will be loaded as nrAlembic imports; otherwise, the default behavior takes place.
lod (Level of details)¶
Defines the level of detail at which a USD (or alembic) will be loaded when loaded as a dependency for a specific process. For example: the rig process has 'lod' set to Full, which means that any asset coming from any process as a dependency for the 'rig' process will be loaded 'Full' (for example in the pxrUsdReferenceAssembly node in Maya, or the equivalent in Houdini or any other application).
Valid values: Unloaded, Cards, Collapsed, Expanded, Playback, Full
These specify how a pxrUsdReferenceAssembly node will be configured for dependencies that use the 'Assembly' method from 'usd_dep_method' below to load the USDs.
usd_dep_method¶
Specifies how USD dependencies of this process will be loaded, valid values are:
Assembly: using pxrUsdReferenceAssembly in maya (or equivalent in other applications)
UsdNode: like the nrAlembic node, a USD node that has output connections to all of its objects, for dynamically updating assets.
Reference: Normal reference (like maya reference)
allow_ver_in_publish¶
If True, publishing a task that has all or some of its dependencies versioned will be permitted, but a warning will be raised every time you publish with old asset versions.
use_versioned_deps¶
If True, all calls querying the 'current' task/process dependencies will return the versioned dependency files rather than the versionless ones.
By default, 'current' is versionless and returns the versionless file, which gets updated whenever a new version of the dependency asset is approved.
supported_on_farm¶
If True, the publisher will ask the user whether the task should be submitted to the farm for "Gen. Preview", "Processing" or "Simming". Being True also lets the publisher know that these are heavy assets, so it will ask the user whether they already have the files to be published, instead of generating them from scratch during the publish process.
dynamic¶
If True, the caching/baking operations for this process will be generated for the full timeline range rather than a static frame.
NOTE:
Any dynamic or simulation process requires a frame range, Seneferu Production Manager will get the frame range from
the current active version (version 0) of the most recent edit process xml (Seneferu Production Manager will do this
automatically by parsing the edit xml and finds out the shot frame range used in the edit)
requires_preroll¶
When enabled, the process will do a preroll (or will generate a pre-roll cache) before running the actual file type generation. For example: the Maya hairSim process can have requires_preroll checked, which will generate the nHair cache before writing out the hair curves to alembic or a render procedural.
approves_proc¶
A comma-separated list of processes that this process approves when it gets approved. For example: set the 'render' process's approves_proc to 'lighting', and the lighting process will automatically be approved when all of its children 'render' tasks are approved, with respect to the subcontext. (Note: render tasks with status == None are ignored when querying the approved 'render' tasks, allowing lighting to be approved when some 'render' tasks are approved and others have no status; the system uses this feature to disable a render task automatically when the user publishes a lighting task that has that render layer disabled.) Another example of this is the 'comp' process, which is auto-approved by the 'compRender' process.
Notes:
- Tasks approved by their children like 'lighting' and 'comp' cannot
be approved by supervisors, and the 'Approved' option will be hidden
in Production Manager supervision mode for these tasks.
- The auto approved tasks like 'lighting' and 'comp' will not re-trigger
their children "render" processes that made the approval back to "Revisit".
- Using the "share_notes_with" feature, the notes for these processes
should be shared, so, adding a note on 'render' task will share the
same note with the lighting task that created this 'render' task,
while adding a note on the lighting task should not go to the render task,
because the render tasks are just child processes of the lighting task
and are not owned by anybody who can modify the lighting recipe; instead,
these render tasks are auto assigned to the renderWrangler user, who receives
notes like requests to restart a render layer rendering job or retry
a corrupted frame... etc.
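The auto-approval condition described above (children approve the parent, status-less children ignored) can be sketched as:

```python
def parent_approved(child_statuses):
    """A parent like 'lighting' auto-approves when every child 'render' task
    that has a status is Approved; children with status None are ignored
    (e.g. disabled render layers). Illustrative sketch only."""
    considered = [s for s in child_statuses if s is not None]
    return bool(considered) and all(s == "Approved" for s in considered)


print(parent_approved(["Approved", None, "Approved"]))  # True
print(parent_approved(["Approved", "Processing"]))      # False
```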
disallow_approval¶
Disables direct approval of the process in Production Manager. This is used for processes like 'lighting' or 'comp', which should not be approved directly but by approving their children 'render' processes.
vendor_can_approve¶
Enables users who are members of the 'remote' group to approve tasks of this process. By default, users belonging to the 'remote' group, like vendors and sister companies, cannot set a task status to "Approved" even when they are supervisors on the task; final approval privileges are granted only to the mother company/studio. When this flag is set for the process, a supervisor in the 'remote' group can set a task to the final "Approved" status.
latest_is_approved¶
When checked, any new latest version published to this process will automatically become the current floor version as well, as long as no version was ever approved before. This way, new versions of the asset are immediately available to the rest of the team in the downstream processes without the supervisors' permission; but once a specific version is approved, that version remains the current version until a newer version is approved.
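The version-currency rule can be condensed into a small helper (a hypothetical sketch of the policy, not the real versioning code):

```python
def current_version(latest_version, approved_version, latest_is_approved):
    """If a version was ever explicitly approved, it stays current until a
    newer one is approved; otherwise, with latest_is_approved on, the newest
    published version becomes current automatically."""
    if approved_version is not None:
        return approved_version
    return latest_version if latest_is_approved else None


print(current_version(5, None, True))  # 5: latest auto-promoted
print(current_version(7, 4, True))     # 4: explicit approval wins
```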
multi_child_proc¶
The format is 'source_process'->'target_process', for example: rig->animation. This expression tells the system that this process will be synced with multiple children processes, such that changing its status affects the children's statuses; creating an asset locator for it creates children asset locators; and publishing it publishes the children assets. This type of process is needed when several related tasks need to be assigned to the same user and published all together in one pass. Example: the 'animationSh' process has multi_child: rig->animation, where a user is assigned to animate several characters in one pass. In this case, the 'animationSh' task assigned to the user will load the 'rig' for the children characters; when creating asset locators from these loaded rigs, the application will create an 'animation' asset locator for each loaded and selected character 'rig' asset; then, when the user publishes the 'animationSh' asset, the publisher will publish all the children 'animation' assets before publishing 'animationSh' itself.
pub_texture¶
If True, the textures mapped to the objects' shaders under the asset locator will be published, as long as the shaders assigned to the objects are supported.
use_namespace¶
This flag only has an effect in Maya. If True, dependencies for this process will be loaded into a namespace (a numbered asset_name namespace, e.g. kaza1:kaza_model_ast).
publish_actions¶
The publish actions has the following syntax:
<q?>/act-status=typ1&typ2&typ3|act2-status=typ1&typ2&typ3|actN...
where:
<Task Control Question that should be asked to the user?>/<the button
text for an answer>-<the status the task should be set to it if the user took
this action>=<the_asset_types this actions apply on seperated by '&'s>
NOTE:
You should not use '.' periods in the action phrase except when it's meant to
set up per-application actions, although you can use one action for all apps
without providing a period separation, like the rest of the other processes
ctrl elements.
If the given action declares publish types for a button action, those types will be ignored by the internal publisher, assuming that the action script will generate them (or process them on the farm), since you already configured the action to cover these publish types (multiple types can be given separated by '&'s, as above).
Note:
The publisher will also respect separating the actions by '.' period
so you can configure a different process action for each different
application.
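The action syntax above can be parsed roughly like this (an illustrative sketch; the real Task Control parser may differ, and button texts containing '-' or '=' would need escaping):

```python
def parse_publish_action(expr):
    """Parse a publish_actions expression of the form
    '<question?>/<button>-<status>=<typ1&typ2>|<button2>...'
    The status and types are optional per button."""
    question, _, actions = expr.partition("/")
    parsed = []
    for action in actions.split("|"):
        action, _, types = action.partition("=")
        button, _, status = action.partition("-")
        parsed.append({"button": button,
                       "status": status or None,
                       "skip_types": types.split("&") if types else []})
    return question, parsed


q, actions = parse_publish_action(
    "This process is supported on the farm, what would you like to do?"
    "/Just Publish|Simulate on the farm-Processing=alembic")
print(q)
print(actions)
```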
Optionally, you can configure applications.conf to link a script to be called when this action is taken (the user clicks the action button). The system will then look for a translation of this button in applications.conf; if one is found, the system will call it. This can be your custom script to generate the publish file types that you ignored in the action syntax.
Example Process Control action on clothSim process:
This process is supported on the farm, what would you like to do?/Just
Publish|Simulate on the farm-Processing=alembic
The action above will ask the user, during the publish and before generating the publish files, the question above. The user will then see two buttons, "Just Publish" and "Simulate on the farm". If the user clicks the "Simulate on the farm" button, the task status will be set to "Processing" and the publisher will skip generating the "alembic" file configured in the 'publish_types' column. You then have to make sure to configure applications.conf to support your action. Continuing our example, applications.conf can have an entry like this (for the Maya application, for instance):
actions: [{"Simulate on the farm": "sim_module.write_cache"}]
Now, since there is an action translation for the user's button, the system will run the given Python method as:
import sim_module
sim_module.write_cache()
The system will pass to your module all the information you need to do the job as args, so your write_cache() method should be defined with **kwargs, or should at least define all of the following args, which are passed to any action module.method():
- nr=self.my (an instance of NReal tactic api connected to the server already)
- ast_sk=self.ast_sk (the asset search key)
- context=self.context (the task context)
- asset_name=self.currentAstName (the asset name)
- category=self.category (the asset category)
- snapshot_code=<code> (the code of snapshot where the files will be published)
- task_code=<code> (the code of the task being published)
- files=<list_of_files> (the files that were generated internally by the publisher)
- files_types=<list_of_types> (the types equivalent to each given file in files)
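A hypothetical action handler matching this contract might look like the following (the body is illustrative; only the argument names come from the list above):

```python
def write_cache(**kwargs):
    """Hypothetical 'Simulate on the farm' action handler.

    The publisher passes nr, ast_sk, context, asset_name, category,
    snapshot_code, task_code, files and files_types as keyword args;
    **kwargs keeps the handler tolerant of extra/unused ones.
    """
    ast_sk = kwargs.get("ast_sk")
    context = kwargs.get("context")
    files = kwargs.get("files", [])
    files_types = kwargs.get("files_types", [])
    # pair each generated file with its type before submitting to the farm
    jobs = list(zip(files, files_types))
    print("submitting %d file(s) for %s (%s)" % (len(jobs), ast_sk, context))
    return jobs


write_cache(ast_sk="asset?code=CHR001", context="clothSim",
            files=["/tmp/chr001.abc"], files_types=["alembic"])
```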
Note:
You can also use the Processes Control actions without translation
scripts in applications.conf, in cases where you just want to manage
the status the task will be set to based on the action button the user
clicked; you can then have a pre_py_trigger or post_py_trigger
that does something based on these task statuses.
Another example on lighting process actions:
--------------------------------------------
::
This process supports some additional actions, what would you like
to do?/Just Publish|Submit Preview Render-Gen. Preview|Submit to
Render-Processing
Based on the action above, when the user publishes a lighting task, the
Task Control will ask the user "what would you like to do?" and will give
the user the given three answers to choose from, each of which is
a button the user can click to answer. Depending on the user's answer,
the corresponding status is picked up and the Task Control acts based on
this answer; for instance, if the user answers the above question with
"Submit to Render", the Task Control will set the published task
status to 'Processing'.
At this point, nothing more happens beyond the status change; there
is then a pre_py_trigger that submits the published recipe to the farm
whenever it sees the task status become "Processing".
See the following Task Control & applications.conf chapter for more
information on how the action works.
description¶
Workflow hints for the process; this is used whenever a task doesn't have a brief/description when created. This is especially useful for freelancers and vendors who are joining the project and need to quickly engage and work properly without having to read long documents and learn a lot of tools.