Production Manager & Task Control¶
The applications supported by Production Manager and Task Control are defined in the config file applications.conf, which specifies the supported applications and how the correlation between Production Manager and each application's Task Control is handled.
The applications.conf looks like this:
{
    "Maya":
    {
        "port": 7022,
        "main_type": ["maya"],
        "main_ext": ["ma", "mb"],
        "open": "assets_io.open_file",
        "reference": "assets_io.reference_asset",
        "import": "assets_io.import_asset",
        "retarget": "assets_io.retarget_asset",
        "actions": [ {"Submit Final Render": "nr_utils.jobSpooler", "status": "Processing"},
                     {"Simulate on the farm": "my_module.my_action_function", "status": ""}
                   ]
    },
    "Katana":
    {
        "port": 7000,
        "main_type": ["katana"],
        "main_ext": ["katana"],
        "open": "katana_io.open_recipe",
        "reference": "katana_io.load_asset",
        "import": "katana_io.load_asset",
        "retarget": "katana_io.retarget_asset",
        "execute": "katana_io.load_asset",
        "actions": [ {"Submit Final Render": "nr_utils.jobSpooler", "status": "Processing"},
                     {"Simulate on the farm": "my_module.my_action_function", "status": ""}
                   ]
    }
}
The configuration above adds two applications to the Task Manager applications list (Maya & Katana), configured according to the entries above. If, for example, the Production Manager is asked to load the dependencies of a given task into the application “Katana” using the ‘reference’ method (or whatever is configured in processes_ctrl for this process), then the task manager will use the katana_io.load_asset function to load the assets, based on how you configured the ‘reference’ entry above. Concretely, the Production Manager sends “import katana_io;katana_io.load_asset(…)” to the application's (Katana's) Task Control listening server, and the application Task Control in turn executes this received code using bs_exec() as hooked-up code that has access to the whole Task Control ‘self’ object.
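As an illustration only, the round trip could look roughly like the sketch below. The config keys come from applications.conf above and the command string mirrors the “import katana_io;katana_io.load_asset(…)” example, but the plain-TCP transport and the build_load_command helper are assumptions made for this sketch, not the actual Production Manager protocol:
import json
import socket

def build_load_command(conf_path, app, method, args_repr):
    # Hypothetical helper: resolve the configured handler for a method from
    # applications.conf and build the code string sent to the Task Control.
    with open(conf_path) as f:
        conf = json.load(f)
    handler = conf[app][method]            # e.g. "katana_io.load_asset"
    module = handler.split(".")[0]         # e.g. "katana_io"
    return conf[app]["port"], "import %s;%s(%s)" % (module, handler, args_repr)

port, command = build_load_command("applications.conf", "Katana",
                                   "reference", "depFile_data, 'reference', ...")
# Assumed plain-TCP delivery to the Task Control listening server, which
# would then execute the received code with bs_exec().
with socket.create_connection(("localhost", port)) as sock:
    sock.sendall(command.encode("utf-8"))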
When the Production Manager sends the command to the Task Control, it fills it with the following arguments about the task to be loaded:
def katana_io.load_asset(self, depFile_data, method, for_asset, for_context, ignore_dup):
    '''
    :param depFile_data: a dictionary holding the following information about
        the dependency asset being loaded:
        {'asset_name': asset_name, 'dep_context': task['context'],
         'dep_process': task['process'], 'dep_file': [sn_file],
         'dep_type': target_type, 'checkout_method': method,
         'dep_version': dep_version, 'dep_ast_sk': ast_sk,
         'dep_category': category}
    :param method: the method as configured for this dep_process in the
        processes_ctrl table; this can be something like 'reference',
        'import', 'load', 'execute', etc.
    :param for_asset: the nice name of the asset owning the 'current_task'
        the dependencies are being loaded for.
    :param for_context: the context (process/subcontext) of the task
        the dependencies are being loaded for.
    :param ignore_dup: Boolean you can use to decide whether the asset should
        be loaded if it already exists in the scene (True means ignore
        duplicates and load the asset even if another instance is in the scene).
    :return: the loading success status.
    '''
Note that the function above takes ‘self’ as its first argument; here ‘self’ is the application Task Control class, which gives you access to a lot of variables and objects, like an instance of the nreal_brain API for example.
Accordingly, if you are writing your own asset io module, you must define your functions so that they have at least all of the above params, as follows:
def my_load_func(self, depFile_data, method, for_asset, for_context, ignore_dup):
You are then free to use any of them or not, but they all must be there.
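For instance, a custom io module could be sketched as follows; the module name my_katana_io and the helper functions are hypothetical, and only the parameter list is dictated by the rule above:
# my_katana_io.py -- a hypothetical custom asset io module.
def _already_in_scene(asset_name):
    # Hypothetical helper: ask the host application whether the asset exists.
    return False

def _bring_into_scene(dep_file, method):
    # Hypothetical helper: do the actual 'reference'/'import'/... operation.
    print("loading %s via %s" % (dep_file, method))

def my_load_func(self, depFile_data, method, for_asset, for_context, ignore_dup):
    asset_name = depFile_data['asset_name']
    for dep_file in depFile_data['dep_file']:        # list of snapshot files
        if not ignore_dup and _already_in_scene(asset_name):
            continue                                 # skip duplicates
        _bring_into_scene(dep_file, method)
    return True                                      # the loading success status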
The “actions” entry above in applications.conf is a list of dicts, each defining a {reaction_name: reaction_command, "status": task_status_for_this_reaction} pair. When the user is asked to take an action according to the processes_ctrl questions in the actions column, the Task Control will run the given reaction_command after publishing the task files, based on the reaction the user chose from the provided ones.
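As a rough illustration (the actual dispatch code is not part of this documentation), resolving a user-chosen reaction from the config could look like the following sketch, where resolve_action is a hypothetical helper:
import json

def resolve_action(conf, app, chosen_reaction):
    # Map a chosen reaction name to its reaction_command and the task
    # status to set after publishing; each "actions" entry holds one
    # reaction plus its "status" key.
    for entry in conf[app]["actions"]:
        if chosen_reaction in entry:
            return entry[chosen_reaction], entry["status"]
    return None, None

with open("applications.conf") as f:
    conf = json.load(f)
command, status = resolve_action(conf, "Maya", "Submit Final Render")
# command -> "nr_utils.jobSpooler", status -> "Processing"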
Note: All the applications use the function above as-is, except Maya (only for now), where the Production Manager calls the asset io module by passing different arguments to it, as follows:
Assuming you configured applications.conf as above, where Maya has {‘reference’: ‘assets_io.reference_asset’}, the task manager will send the function call to the Maya Task Control as:
def assets_io.reference_asset(dep_file, asset_name=None, category=None, context=None,
                              version=None, sk=None, loadReferenceDepth=None,
                              create_locator=None, ignore_dup=True):
    '''
    :param dep_file: the dependency file to be loaded, as decided
        by the task manager based on your processes_ctrl entries/expressions.
    :param asset_name: the asset name of the asset being loaded.
    :param category: the asset category/library type (props, environments, etc.).
    :param context: the task context this file snapshot is coming from
        (i.e. model/body or 'model', etc.).
    :param version: the version of the file being loaded (even if it's the
        versionless file, the version gives the actual version of the asset).
    :param sk: the database search_key of the asset, which you can use
        for example to get the asset sobject from the database.
    :param loadReferenceDepth: needed for Maya referencing; it's there but
        you can ignore it.
    :param create_locator: decides whether the loaded asset should have a
        locator created for it (or whether it comes with its own locator).
        By default, Maya types will have this passed as False, while for
        Alembic and USD for example it will be True. When you write your
        own method, you can ignore this and expect your own.
    :param ignore_dup: Boolean you can use to decide whether the asset should
        be loaded if it already exists in the scene (True means ignore
        duplicates and load the asset even if another instance is in the scene).
    :return: the loading success status.
    '''
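If you were replacing Maya's factory handler, a minimal sketch matching the argument list above could look like this; the function name my_reference_asset is hypothetical, and the body only assumes Maya's standard maya.cmds Python API:
import maya.cmds as cmds

def my_reference_asset(dep_file, asset_name=None, category=None, context=None,
                       version=None, sk=None, loadReferenceDepth=None,
                       create_locator=None, ignore_dup=True):
    if not ignore_dup and asset_name and cmds.objExists(asset_name):
        return True                        # an instance is already in the scene
    # Reference the snapshot file under a namespace named after the asset.
    cmds.file(dep_file, reference=True, namespace=asset_name or "asset")
    return True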
Notes:
1. If the io method name starts with the word 'retarget_', then one more arg
will be sent by the Production Manager to the Task Control when calling the
method: 'target_process'.
2. The param descriptions above are the defaults based on our assets_io
module behavior. If you are writing your own method, you can use these
params as we did or expect your own, but as mentioned, you have to have
the params declared because they will be passed by the task manager
anyway.
3. You can still use functions from the factory assets io module of the
current application by importing the io module inside your own module and
calling any of its built-in functions as documented in the factory io
module documentation of the specific software (see the sketch after this list).
4. Your IO module should always define a function called
retarget_asset(), even if it will not be used; the task manager has to
send it to the Task Control after filling its arguments, as follows:
def retarget_asset(self, current_process, srcRetargetFiles, dst_refNode_patterns):
    '''
    Used to retarget assets based on the processes_ctrl retargeting
    methods explained above in the processes control documentation.
    :return: ...
    '''
or
def retarget_asset(self, **kwargs):
    pass
5. The environment variable <Application>_Port overrides the port that both
the task manager and the application Task Control will use when
initialized from a terminal. For example, someone can run two instances
of Katana where each is connected to a different task manager and
different processes, by:
$ export Katana_Port=<port_number>
$ task_manager & katana
Notice that the application name in the envvar should appear exactly
as it shows in the Production Manager applications drop-down list, and this
is of course case sensitive.
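To illustrate note 3 above, a custom module might wrap the factory io module like the sketch below; the wrapper name my_wrapped_load and the extra print are hypothetical, while katana_io.load_asset is the factory function referenced in applications.conf:
# my_katana_io.py -- hypothetical wrapper around the factory Katana io module.
import katana_io    # the factory asset io module of the current application

def my_wrapped_load(self, depFile_data, method, for_asset, for_context, ignore_dup):
    # Illustrative extra step before delegating to the factory implementation.
    print("about to load %s for %s (%s)" %
          (depFile_data['asset_name'], for_asset, for_context))
    return katana_io.load_asset(self, depFile_data, method,
                                for_asset, for_context, ignore_dup)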
Tasks Display:¶
The tasks displayed in Production Manager are truncated to the max count configured in the workflow.conf MAX_TASKS_DISPLAY entry. This is needed for performance and interactivity, especially when the user/project tasks grow to thousands of tasks. Still, the list is truncated in a smart context, so that you always see the tasks you have to see, while the least needed tasks sit at the bottom of the list and get truncated first. So, if the max count is 25 (the default), the first 25 tasks are the higher priority tasks and the tasks to be delivered earliest, while the low priority tasks and the not-ready-to-start tasks come last (a rough sketch of this ordering follows the notes below). Notice that when there are more than 25 tasks, no one can search them manually; you will always have to use the filters to get what you want.
Notes:
- The truncated list doesn’t mean that the tasks are no longer there; they are cached in the background but not displayed, for better viewing and better performance. You can still access the tasks not in the list using the search filters.
- The dependencies tasks lists are always full lists and are not truncated, even if their count exceeds the maximum allowed tasks display.
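The exact ordering logic is internal to Production Manager; the sketch below only mirrors the prose above (ready tasks with higher priority and earlier deliveries float to the top, the rest is truncated but stays cached), with hypothetical task dicts:
MAX_TASKS_DISPLAY = 25    # mirrors the workflow.conf entry; value assumed

def truncate_tasks(tasks):
    # Hypothetical re-implementation of the described display ordering.
    ordered = sorted(
        tasks,
        key=lambda t: (not t["ready_to_start"],   # not-ready tasks sink last
                       -t["priority"],            # higher priority first
                       t["due_date"]),            # earlier deliveries first
    )
    return ordered[:MAX_TASKS_DISPLAY], ordered[MAX_TASKS_DISPLAY:]

displayed, cached = truncate_tasks([
    {"priority": 5, "due_date": "2024-06-01", "ready_to_start": True},
    {"priority": 1, "due_date": "2024-05-01", "ready_to_start": False},
])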
Tasks Filtering:¶
Asset Name - Task Context - Category
The three search filters at the top of the task manager filter the tasks in a smart context: tasks starting with the given filter are fetched first, then tasks containing the filter are fetched next. This makes it easy for the user to find tasks when he doesn’t remember the exact name he is looking for, especially when the tasks list is truncated to the max configured tasks count.
For example, typing ‘bla’ in the asset name field will first return all assets starting with ‘bla’, then any asset containing the letters ‘bla’. The task manager will still re-arrange the tasks based on the process order in the workflow and the task priorities, but the asset you are looking for will always be there: even when the tasks list is truncated, the tasks merely containing the filter are filtered out by the truncate function first, while the tasks starting with your filter remain in the list (you will notice this behavior when the count of tasks containing the filter but not starting with it is greater than the max-allowed tasks count as configured in workflow.conf).
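The actual query implementation is not shown in this documentation; a minimal sketch of the described two-pass behavior could be:
def smart_filter(names, query):
    # Prefix matches are returned first, substring matches after them.
    q = query.lower()
    starts = [n for n in names if n.lower().startswith(q)]
    contains = [n for n in names
                if q in n.lower() and not n.lower().startswith(q)]
    return starts + contains

# 'black_cat' (prefix match) comes before 'tabla' (substring match).
print(smart_filter(["tabla", "black_cat", "dog"], "bla"))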