Nothing Real Brain

class nreal_brain.Initialize(server=None, ip=None, project=None, user=None, passwd=None, ticket=None)
add_task_icon(snapshot_code, src_path, file_type='icon', mode='upload', create_icon=True)

deprecated

add_to_snapshot(snapshot_code, context=None, description='', paths=None, paths_types=None, mode='move', checkin_type='auto', asset_name=None, rename_dir_files=False, create_contents_file=False, create_icon=False, use_queue=False, task=None, latest_is_current=True, overwrite=False, post_checkin_status=None)

Append to the given snapshot.

Parameters:
  • snapshot_code – snapshot code.
  • context – snapshot context
  • description – description of the snapshot
  • paths – one file, multiple files, a directory, or a mix of files and folders. NOTE: in mixed mode (files and folders), a folder path must not end with ‘/’, and publishing multiple folders requires a different extension for each directory.

  • paths_types – snapshot types list corresponding to the given paths list (the same order is a must).
  • mode – can be ‘move’, ‘copy’, ‘upload’, or ‘in place’.
  • checkin_type – ‘auto’ will rename the published files based on the configured naming convention, while ‘strict’ will use Tactic internals.
  • asset_name – the nice asset name (not the asset search key name).
  • rename_dir_files – if True, the files inside the directory will be renamed based on the asset name and context.
  • create_contents_file – creates a .contents file inside the folders when checking in a folder.
  • create_icon – either create an icon or not.
  • latest_is_current – if True and the task has never been approved before, the latest published version will be set to current.
  • use_queue – whether to use a queue for the checkin (True uses the configured queue; alternatively, a queue system can be named explicitly as Tractor|OpenCue|Deadline).
Returns:

Status
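
Example (a minimal sketch; the connection arguments, snapshot code, and file path are placeholders, not values from this documentation):

import nreal_brain

server = nreal_brain.Initialize(project='myproj', user='jdoe', passwd='xxxx')
# Append a turntable movie to an existing snapshot, copying the source file:
status = server.add_to_snapshot('SNAPSHOT00001234',
                                context='model/body',
                                description='added turntable render',
                                paths=['/tmp/turntable.mov'],
                                paths_types=['media'],
                                mode='copy')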

analyze_performance(context, project, login_search, start_date, end_date, login, USERS, user_title, users_joined, stupid_version=False, is_group=False, printable=False, doIt=True, status_pipeline_code='', status_colors='', calc_best_performing=False, show_project_status=False, enable_all_features=False)

stupid_version: renames the displayed statuses, e.g. “Assignment” becomes “Pending”, “Pending” becomes “Ready to Start”, “Waiting” becomes “On Hold” …etc

auto_assign_tasks(tasks, users=[], by_users_load=True, by_users_performance=False, by_tasks_difficulty=False)

Auto-assigns the given tasks to the most suitable team members based on task difficulty, user experience level, asset importance, user performance, and user availability/workload.

Parameters:
  • tasks
  • users
  • by_users_load
  • by_users_performance
  • by_tasks_difficulty
Returns:
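
Example (a sketch; assumes server is an nreal_brain.Initialize instance as in the earlier example, with the tasks coming from a placeholder query and placeholder logins):

tasks = server.get_user_tasks(project='myproj', status='Pending')
server.auto_assign_tasks(tasks,
                         users=['jdoe', 'asmith'],
                         by_users_load=True,
                         by_users_performance=True)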

bs_sync(files)

Checks whether the files already exist in local storage; if not, adds them to the list of files to be downloaded.

Parameters:files – [/local/in/relation/to/tactic/files/paths] as queried from Tactic; these come as paths local to the Tactic server.
Returns:updated files list
cache_stypes(stype)
calc_farm_priority(task)

Calculates a priority number of the given task based on the asset importance and bid due date, that can be used to prioritize tasks processing on the farm.

Parameters:task
Returns:
calc_performance(project, login, login_search, users_joined, calc_best=True)
checkin(sk, context, description='', paths=None, paths_types=None, mode='move', checkin_type='auto', rename_dir_files=False, asset_name=None, create_contents_file=False, is_current=False, create_icon=False, use_queue=False, overwrite=False, is_revision=False, post_checkin_status=None)

Checkin the given file paths to the sk context.

Parameters:
  • sk – tactic db asset __search_key__ value.
  • paths – one file, multiple files, a directory, or a mix of files and ONLY one folder. NOTE: in mixed mode (files and a folder), the folder path must not end with ‘/’.

  • paths_types – snapshot types list corresponding to the given paths list (the same order is a must).
  • checkin_type – ‘auto’ will rename the published files based on the configured naming convention, while ‘strict’ will use Tactic internals.
  • asset_name – the nice asset name (not the asset search key name).
  • create_contents_file – if True, published directories will include a .contents file listing the files in the directory.
  • rename_dir_files – if True, the files inside the directory being published will be renamed before publishing the directory.
  • mode – can be ‘move’, ‘copy’, ‘upload’, or ‘in place’.
  • is_current – set the snapshot of the checkin to ‘current’
  • create_icon – due to a bug in Tactic, this may cause the whole checkin process to fail when trying to check in a non-existing icon.
  • use_queue – if a queue system is given (or True), it overrides the copying mode above and the files are copied using a queue job instead (values like Tractor|OpenCue|Deadline can be used to override the configured queue system).
  • overwrite – enables overwriting files that already exist in the version snapshot tree.
  • post_checkin_status – if provided, the task will be set to this status when the checkin (and the file transfer, in the case of remote publishes) is complete.
Returns:

Status
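
Example (a sketch; the asset search key and sandbox path are placeholders, and server is assumed to be an nreal_brain.Initialize instance):

sk = 'bs/asset?project=myproj&code=CHR001'
status = server.checkin(sk, 'rig/body',
                        description='first rig pass',
                        paths=['/sandbox/myproj/chr001_rig.ma'],
                        paths_types=['maya'],
                        mode='move',
                        checkin_type='auto',
                        is_current=False)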

checkin_icon(sk, context, icon_path, use_queue=False)

Checkin a task icon.

Parameters:
  • sk – this is the asset search_key not the task search key.
  • context – task context.
  • icon_path
  • use_queue – if True, the checkin will go using ‘upload’ mode.
Returns:

checkin_preallocate(sk, context, description='', is_current=False)

Allocates and creates an empty snapshot in Tactic; this is efficient for heavy file/simulation checkins.

checkin_sequence(search_key, context, file_path, file_range, snapshot_type='sequence', description='', file_type='main', metadata={}, mode='move', is_revision=False, info={})

Checkin a sequence of files. We implemented this feature because Tactic itself doesn’t handle sequences in the expected way: it treats the files as <filepath>.####.<ext>, so when changing the current pipeline version and resolving the versionless file, the versionless file actually points to nothing but this hashed path.
Parameters:
  • search_key – database search_key
  • context – task context
  • file_path – file path
  • file_range – frame range
  • snapshot_type – by default, it’s ‘sequence’ to mark this snapshot as not a normal one
  • description
  • file_type – anything of your choice to name a type; needed when querying the files from the database.
  • metadata – metadata that you might need to pass with the sequence; you can then use it, for example, to write it out into an EXR in Nuke.
  • mode – the file transfer mode (can be ‘copy’, ‘upload’, or the default ‘move’)
  • is_revision – has never been implemented in the NR API.
  • info
Returns:
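
Example (a sketch; the search key, frame-padded path, and the '1001-1100' range format are placeholder assumptions, and server is assumed to be an nreal_brain.Initialize instance):

sk = 'bs/shot?project=myproj&code=SH0010'
server.checkin_sequence(sk, 'lighting',
                        file_path='/renders/sh0010/beauty.####.exr',
                        file_range='1001-1100',
                        description='beauty pass',
                        file_type='render',
                        metadata={'colorspace': 'ACEScg'},
                        mode='move')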

checkin_using_queue(sk, context, files_paths, description='', is_current=False, mode='', create_icon=False, snapshot_code=None, task=None, queue=None, overwrite=False, auto_status=True, post_checkin_status=None)

Checkin the given files_paths using a queue.

Parameters:
  • sk – is the asset search key
  • context
  • files_paths
  • description
  • is_current
  • mode – deprecated; the queue method takes precedence over the upload/move/copy modes.
  • create_icon
  • snapshot_code – if no snapshot code is provided, a new one will be allocated for the future file injection.
  • auto_status – sets the task status to ‘Waiting’ until the file transfer completes.
  • post_checkin_status – if provided, the task status will be set to this status after the file(s) transfer is complete.
  • task – optionally provide the task sobject the files are being checked in for; if not provided, it will be queried internally.
  • queue – if None, the queue system will come from the pipeline config of the project QUEUE->SYSTEM
  • local_to_cloud
Returns:

checkout(sk, context, version=-1, file_type='main', dir='', level_key=None, to_sandbox_dir=True, mode='copy', use_queue=False)

Copies the files from the main project tree to the user sandbox working directory.

Parameters:
  • sk – the asset database search_key
  • context – the task context (process/subcontext) like model/body (or model)
  • version – the version to be checked out (0 is current, -1 is the latest, or any other version number).
  • file_type – the snapshot type you used when checking in the files.
  • dir – unused; kept for a future ability to retarget the files’ destination instead of forcing them into the sandbox.
  • level_key
  • to_sandbox_dir
  • mode – keep it as a ‘copy’.
  • use_queue – whether to use a queue for the checkout and dependency sync (True uses the configured queue; alternatively, a queue system can be named explicitly as Tractor|OpenCue|Deadline).
Returns:

list of checked-out file paths
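
Example (a sketch; the asset search key is a placeholder, and server is assumed to be an nreal_brain.Initialize instance):

sk = 'bs/asset?project=myproj&code=CHR001'
files = server.checkout(sk, 'model/body', version=-1, file_type='main')
# files now holds the paths copied into the user sandbox working directory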

checkout_deps_using_queue(depFiles_info, task, queue=None)

Synchronize all given dependency files list on the remote storage side from the main Tactic storage.

Parameters:
  • depFiles_info – a list of depFile_info dicts constructed by the get_dep_files() method
  • task – the main task to checkout the deps for
  • queue
Returns:

‘in_queue’ if files are being synced, ‘files_ready’ in case nothing needs to be synced, or False in errors

checkout_on_client_side(snapshot, file_type=None)

Checkout file_type from a snapshot on the client side (running as a regular copy on the local calling machine instead of the Tactic server). This is needed to check out tasks on the client (remote) side once the files have already been synced or are available on the remote-side storage.

Parameters:
  • snapshot – snapshot sobject.
  • file_type – the snapshot type.
Returns:

the copied (checked out) file.

checkout_using_queue(files_paths, task, queue=None)

Queues the files’ upload/download in the configured nr_queue to their original place (think of this as a synchronization process, but run through a queue); internally it uses bs_sync.

Parameters:
  • files_paths
  • task – The task being checked out or the task the files are being checked out for
  • queue – the queue client (if None, the config[‘QUEUE’][‘SYSTEM’] is used)
Returns:

‘files_ready’, ‘in_queue’, or False on errors

clean_output_tasks(task, triggers=False, run_on_server=False)

When a revisit/dirty task is cleaned up, downstream tasks (especially Waiting tasks) should be reverted to their original states.

Parameters:
  • task
  • triggers
  • recurse_on_approval – allows a simplified trigger to check whether reverting the task to Approved should affect any downstream tasks (in most cases, Waiting tasks should be reverted to Ready if the task is reverted to Approved). Note: if triggers are enabled, this flag has no effect, since the triggers do the same job but in more depth.
  • run_on_server
Returns:

create_dir_contents_file(root_dir)

Creates a directory contents JSON file inside the given root_dir.

Parameters:root_dir – dir of interest.
Returns:the .contents file path
current_ticket()
delete_invalid_tasks(tasks)

Deleting assets sometimes leaves tasks behind. These invalid tasks, which don’t belong to any asset, give misleading information about the project and about the count of tasks belonging to users and departments, so they have to be deleted… this method safely deletes such tasks while keeping any files that were published into them.

WARNING:

You have to run this method only from the server owner or a user account; otherwise, all tasks will be inaccessible and considered invalid!

Parameters:tasks – tasks to be validated and cleaned up
Returns:the final tasks list after deleting any invalid tasks.
file_need_update(sk, context, version=-1, file_type='main', level_key=None, to_sandbox_dir=True, mode='lib')

deprecated

gen_dept_others_statement(login)

returns a Tactic expression syntax for all group members except the given login, to be used for database queries, e.g., [‘assigned’, ‘<user1>’][‘or’][‘assigned’, ‘<user2>’] …etc, where user1, user2 are members of the given group

gen_group_assign_statement(group, exclude=None)

returns a Tactic expression syntax to be used for database queries, e.g., [‘assigned’, ‘<user1>’][‘or’][‘assigned’, ‘<user2>’] …etc, where user1, user2 are members of the given group

get_all_ast_file_versions(ast_sk, ast_context, ast_types, ast_version)

Returns the file paths for all task versions: ‘versioned’ (the requested version), ‘versionless’, and ‘versioned_dst_path’.

Parameters:
  • ast_sk – tactic db asset search_key value
  • ast_context – the context to be queried (ie, model/body)
  • ast_type
  • ast_version
Returns:

dict {‘versioned’: versioned_file, ‘versionless’: versionless_file, ‘versioned_dst_path’: versioned_dst_path}

get_all_naming(sk, context, for_version='Future')

Gets all file naming information for the given asset/context when it gets published to the next future version.

Parameters:
  • sk – the asset sk
  • context – the task context
  • for_version – accepts an (int) version number if the requested naming is meant for an exact version; otherwise, the naming information will be built for the future publish version.
Returns:

dictionary of the base dir naming convention where this asset’s processes will be published to:

{'asset_dir_name': asset_dir_name, 'context_dir_name': context_dir_name, 'full_path': full_versioned_path, 'versionless_full_path': versionless_basename, 'versionless_basedir': versionless_basedir, 'future_version': str(future_version)}
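
Example (a sketch; the search key is a placeholder, and server is assumed to be an nreal_brain.Initialize instance):

naming = server.get_all_naming('bs/asset?project=myproj&code=CHR001', 'model/body')
print(naming['full_path'])       # where the next publish will land
print(naming['future_version'])  # e.g. '004'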

get_allowed_statuses(user, ast_sk=None, context=None, proc_ctrl=None, approvals_only=True)

Gets the statuses the user is allowed to set on a task

Parameters:
  • user – user name
  • context – the process/context to get the allowed statuses for
  • proc_ctrl – optional processes control dictionary entry
  • approvals_only – only approval statuses are returned (i.e., “Approved”, “Art. Approved”)
Returns:

list of status or empty list

get_asset_from_snapshot(snapshot)

deprecated

get_asset_info_from_asset_file(asset_file)

Gets the asset information from the given asset file. Note that this method is currently configured to work ONLY on the bs/asset, bs/scene, bs/shot, and bs/script search_types.

Parameters:asset_file
Returns:None or {‘asset_name/ast’: asset_name, ‘context’: context, ‘version’: version, ‘ast_sk’: ast_sk, ‘category’: category, ‘versionless’: <versionless_state_of_given_file>}
get_category(sk)
Parameters:sk – sobject search key
Returns:Unknown, or the asset category based on the bs/<search_type> queried from the given asset sk.
get_connected_objects(sk, input_process, currStype, inProcStype='', treat_as_assembly=False, affected_by_children=True, get_indirect_processes=False, asset_category=None)

Returns the SOBJECT(s) of the external stype containing the given process, like returning the shot SOBJECTs that contain the ‘assemblySh’ process for the current ‘bs/asset’ stype and that have the current asset as a member (the shots).

Parameters:
  • sk – related process search key (sk of the process we query the inputs for).
  • currStype – the stype of the process that has the task status updated.
  • process – the internal/external process that is connected to the one that has updated.
  • treat_as_assembly – if True, then the input process from the current stype will be treated as if it is an assembly process, so all its children will be returned with the final result.
  • get_indirect_processes – if True, indirect planned assets will be returned (like returning an asset-level process for a scene-level process when the asset is planned for a shot in the scene but not for the scene directly).
Returns:

SOBJECTS that have the above process.

get_current_snapshot(sk, context)
Parameters:
  • sk – asset search_key on the database
  • context – the task context to be queried
Returns:

the snapshot of the current version of the given asset/task context or None

get_dailies_files(tasks, versions, belonging_assets, type='media', in_date='TODAY', versionless=False, use_queue=False)
Parameters:
  • tasks – the tasks owning the files to be returned.
  • versions – the versions of the given tasks respectively.
  • belonging_assets – the parent sobjects for the given tasks respectively.
  • type
  • in_date – can come in several formats (like tactic TODAY|<exact_date>|<date>:|<date>:<date>|:<date>)
  • versionless
  • use_queue – currently there is no plan to support this; it would be needed to support reviewing dailies from a remote server.
Returns:

the task files that were published to dailies on the given date or within the specified after/before dates.

get_dailies_tasks(user, project=None, supervisor=False, in_date='TODAY', use_queue=False)
Parameters:
  • user – is the assigned supervisor in case ‘supervisor’ is not a head supervisor (like vfx/cg, director or ‘producers’)
  • project – optionally give a project code; otherwise, the current project will be used for querying.
  • in_date – gets dailies published in_date (supports ‘date’, ‘from:to’, ‘:to’, ‘from:’)

returns the dailies tasks that were published between after_date and before_date

get_dep_file(task=None, tsk_sk=None, ast_sk=None, context=None, version=-1, versionless=True, host_application=None, as_dep_for=None, prefer_usd=False, for_ast_sk=None, return_empty_deps=False)

Returns a dict of the file info as it will be checked out as a dependency, in the form: {‘asset_name’: astName, ‘dep_process’: dep_process, ‘dep_file’: dep_file, ‘checkout_method’: method}

Parameters:
  • task – is the task containing the file snapshot (the ‘search_code’ & ‘context’ needed to query the file)
  • as_dep_for – the main process the dep file is being fetched for, this is needed to customize the returned file type per process, like returning the alembic file of the rig instead of the maya file when getting deps for clothSetup or hairSetup.
  • host_application – decides which expression from proc_ctrl to be used when querying file types, if no application provided, the first expression takes place.
  • prefer_usd – if True, USDs will be preferred over Alembics whenever a USD version of the file is available.
Returns:

{‘asset_name’: astName, ‘dep_process’:dep_process, ‘dep_file’: dep_file, ‘checkout_method’: method}

get_dep_files(tsk_sk, context, depsTasks=None, depsVersions=None, version=-1, versionless=True, allInputs=False, host_application=None, prefer_usd=False, for_ast_sk=None, return_empty_deps=False, run_on_server=False)

Returns a list of files for all the dependency tasks of the given task context. Unlike get_task_filesDict, this method returns a file list of a specific snapshot type ‘main’ (file type). I regularly publish the main file of a task in type ‘main’; your concern here is always to get all the files needed to assemble the current task, so you only need the ‘main’ file from every dependency task, and you are not interested in any other types like ‘playblast’ or ‘ref’ …etc.

depsVersions: if a list of versions is available, the version of each task will be returned from the available depsTasks. If no depsTasks are available, depsVersions will be ignored and the provided version number (default -1) will be returned. If both depsTasks and depsVersions are available, the version arg will be ignored.

It is not recommended to run this method on the server, since it doesn’t rely much on database queries as long as you provide depsTasks; if you didn’t provide depsTasks, it is recommended in that case to run it on the server.
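
Example (a sketch of the client-side flow described above: query the dependency tasks first, then resolve their ‘main’ files; the task search key is a placeholder):

tsk_sk = 'sthpw/task?code=TASK00002008'
deps = server.get_dep_tasks(tsk_sk, 'clothSim')
dep_files = server.get_dep_files(tsk_sk, 'clothSim',
                                 depsTasks=deps, versionless=True)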

get_dep_tasks(sk, process, respect_subcontext=True, project=None, allInputs=False, application=None, category=None, enable_forced_deps=False, run_on_server=False)

Gets the Dependency Tasks database dict sobjects of the given task sk and process (see example below).

Parameters:
  • sk – search key of the related task (Note: this is a task_sk not an asset_sk)
  • process – the related process
  • respect_subcontext – deps with the same input subcontext and deps that don’t have subcontexts will be returned, while others will be ignored.
  • allInputs – if True, all input tasks will be returned, if False, a smart function will return the needed tasks only.
  • enable_forced_deps – enables the unconnected but always_dep_for processes to be returned (returning these processes is needed only when querying dependencies for users’ tasks; otherwise, unconnected processes should not be returned even when they are always deps).
  • run_on_server – runs the function on the server side.
Returns:

input dependency tasks SOBJECTS

Example:

sk = 'sthpw/task?code=TASK00002008'
server.get_dep_tasks(sk, 'clothSim')
get_dependencies_file(ast_sk, context, version, snapshot=None)

Gets the .dep file from the given asset information; if the dep file is not found on local project storage (in remote-mode cases), the .dep file will be downloaded from the main project server.

Parameters:
  • ast_sk – asset search key
  • context – task context
  • version – the version to get the file from
  • snapshot – if not None it will be used to get the .dep file directly from it.
Returns:

path to the queried/downloaded .dep file

get_dir_contents(root_dir)

Creates a directory contents file in JSON format; useful when you want to download folder contents or rename the directory contents.

Parameters:root_dir – dir of interest
Returns:{‘files’: [relative, files, paths, list…], ‘links’: [‘symlink->dst’ list…]}
get_expected_review_state(task)

Analyzes the task status history, checks whether this is a technical process, and then inspects the most suitable review state for the given task (Review/Art Review).

Parameters:task
Returns:
get_files_types(files_paths)
Parameters:files_paths – a list of files paths
Returns:a list of nreal_brain files_types based on the files_types and dir_types in mime.conf (the snapshot type to be used when publishing files and querying snapshots)
get_first_available_user(users_list, ignore_approved=True, ignore_statuses=[])

Not implemented yet. Gets the first available user from the given list.

Parameters:
  • users_list – the users list to select from
  • ignore_approved – a user assigned to an Approved task is considered free.
  • ignore_statuses – list of statuses to be ignored (in case you want to ignore finished tasks).
Returns:

dict {user, free_from, free_to}, free_from and free_to can be None if the user has nothing before or after the availability

get_first_free_user_at(start_date, end_date, users_list, source_tasks, ignore_approved=True, ignore_statuses=[])

Gets the first free user within the given period of time. Note: if none of the given users is free at the given start date, the first user to become available will be returned.

Parameters:
  • start_date
  • end_date
  • users_list – the users list to select from
  • ignore_approved – a user assigned to an Approved task is considered free.
  • ignore_statuses – list of statuses to be ignored (in case you want to ignore finished tasks).
Returns:

dict {user, from}, if no best one matches, ‘from’ has the max availability date

get_frame_range(proc_ctrl, asset, task, force_dynamic_range=False, run_on_server=False)

Returns the frame range for the given task based on the approved edit XML from the most suitable editing task. If the given task has a subcontext, Seneferu will look for edit tasks with the same subcontext first; otherwise, it will look for editing tasks with no subcontext. Seneferu also looks for editing tasks in reversed process order; in other words, it will look in the main project edit XML first, and if it doesn’t find an entry for the asset the given task belongs to, it will look in the first upstream process, and so on.

Parameters:
  • proc_ctrl
  • asset – the asset the task belongs too (for example the shot containing the lighting task)
  • force_dynamic_range – forces getting the dynamic range from an edit XML even if the given process is not dynamic; otherwise, the editing XML will be used as mentioned above.
  • task
Returns:

[start_frame, end_frame, frame_in, frame_out]

get_input_tasks(task, local_inputs_only=True)

Get the input tasks for the given task based on the Pipeline dependency graph

Parameters:
  • task
  • local_inputs_only – only input tasks from the same asset as the given task will be returned
Returns:

list of input connected tasks

get_objects_by_path(task, path)

Walks through the given search-types path from the entry point provided by the given task, and returns all SObjects at this path.

Parameters:
  • task
  • path – Dot-separated linked search type path (bs/asset.bs/asset_in_shot.bs/shot). Note: the path arg can be a list of paths; the return will then be an accumulation over all of them.
Returns:

list of sobjects

get_out_sobjects(sk, task_stype)

Get the planned out sobjects of the asset owning the given task key (i.e., return the assets using a given assembly task key)

Parameters:
  • sk – task search_key
  • task_stype – the search type of the asset owning the given task sk
Returns:

assets sobject list

get_output_tasks(task, local_outputs_only=True, respect_subcontext=True, ignore_approved=False, only_statuses=[], ignore_optional=False, ignore_occluded=False, respect_all_inputs_needed=False, run_on_server=False)

Get the output tasks for the given task based on the Pipeline dependency graph

Parameters:
  • task
  • local_outputs_only – only output tasks from the same asset as the given task will be returned
  • respect_subcontext
  • ignore_approved – ignores the approved tasks from the return list
  • ignore_optional – optional dependencies will not be returned
  • ignore_occluded – ignore outputs that have returned inputs (if rig is returned as out for model, animation will not be returned)
  • only_statuses – returns only the tasks in the given statuses list
  • respect_all_inputs_needed – if True and ignore_occluded is requested, occluded tasks will not be occluded if they have the all_inputs_needed flag set
  • run_on_server – runs the function on the server side
Returns:

list of out connected tasks (note: the tasks are not ordered based on the workflow)
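
Example (a sketch; the task code is a placeholder, and server is assumed to be an nreal_brain.Initialize instance):

task = server.get_task('TASK00002008')
outs = server.get_output_tasks(task,
                               local_outputs_only=True,
                               ignore_approved=True)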

get_parent_assemblies(asset)

Recursively returns all the parent assembly bs/asset(s) that the given ‘assembly’ bs/asset is a child under, either directly or indirectly (like a child under one of the children).

Parameters:asset – is bs/asset who is a member in one or more assembly assets.
Returns:list of parent assembly bs/asset(s), including the input itself as one of the list items
get_process_control(process)
get_processes_ctrl(processes=[])

Get all processes_ctrl sobjects for the given processes list, or the whole table if no processes provided

Parameters:processes – processes of interest.
Returns:processes_ctrl table sobjects ordered as the given processes list.
get_processes_ctrl_table(order_bys=[])
Parameters:order_bys
Returns:[‘process’: {process_sobject}]
get_processes_sorted(processes, based_on_assign=False, based_on_milestone=False, collapse_parallel=False, based_on_order=False, reverse=True)
Parameters:
  • processes – list of processes
  • reverse – if reverse, processes will be returned as higher first
  • collapse_parallel – respects the situations when the processes order are equal
  • based_on_milestone – if enabled, collapse_parallel has no effect; the processes will be collapsed based on milestones but still arranged by order
Returns:

a sorted list of (sOrder, process) tuples; if collapse_parallel, it returns a list of parallel process lists

Example:

if collapse_parallel, returns: [(1030, [‘model’]), (1035, [‘texture’, ‘rig’])]

if not collapse_parallel, returns: [(1030, ‘model’), (1035, ‘texture’), (1035, ‘rig’)]

get_procs_count(processes, project='', run_on_server=False)
get_required_publish_types(process, application, process_ctrl)
get_resources_from_dep_files(sk, context, version, file_type, snapshot=None)

Gets a list of dependency files needed to open or access the given asset context, based on the dependency information found in the ‘dep’ files provided with the given asset context and in each dependency inside, recursively, up to the first nodes in the construction graphs of this asset.

Parameters:
  • sk – the asset database search_key
  • context – the task process/subcontext (or process)
  • version – the snapshot task version from which you want to get the dep file (for example, for the dep file of char_rig_v003.ma you need v=3)
  • file_type – the file type from the currently provided context for which we are querying the dependency files
  • snapshot – you can optionally pass a snapshot to get the dep file from it
Returns:

dep file path

get_source_filenames(snapshot)

Gets the source filename(s) that were originally published into the given snapshot before they got renamed by the naming server.

Parameters:snapshot – the snapshot of the version of interest
Returns:list (original file names)

get_task(task_code)

return the task sobject for the given task_code

get_task_files(sk, context, type='main', version=-1, versionless=True, use_queue=False)

By default, returns the file of type ‘main’ (the main file of the task, like the Maya file of a rig task). If type is ‘DICT’, the whole task snapshot files dictionary will be returned as pairs of {‘type’: [files]}.

Parameters:type – if ‘DICT’, the whole task snapshot files dictionary will be returned as pairs of {‘type’: [files]}; if ‘media’, a list of media file paths will be returned (based on the types configured in mime.conf); otherwise, the file path of the given type will be returned (e.g., type=’maya’ will return the path of the Maya .ma if you publish ‘.ma’ scenes as type ‘maya’)
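
Example (a sketch; the search key and the ‘maya’ type name are placeholders, and server is assumed to be an nreal_brain.Initialize instance):

sk = 'bs/asset?project=myproj&code=CHR001'
maya_file = server.get_task_files(sk, 'rig/body', type='maya', version=-1)
all_files = server.get_task_files(sk, 'rig/body', type='DICT')  # {'type': [files]} pairs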
get_task_icon(sk)
Parameters:sk – the task search_key
Returns:the task icon file path for the given task search_key
get_task_of_process(sk, context)

returns the task SOBJECT for a given asset sk and a context=(process/subcontext)

get_task_parent(task_code)

Returns the asset sobject the task with the given task_code was created for.

get_task_parent_info(task, task_parent=None)

In most cases you need this for building a human-readable UI on the client side, where you want to show the nice name of the asset, asset_in_shot, etc.

Parameters:
  • task – the task sobject.
  • task_parent – the asset owning the task.
Returns:

dict {‘parent_sobject’: task_parent, ‘parent_name’: astName, ‘category’: category, ‘parent_sk’: task_parent[‘__search_key__’], ‘stype’: stype}

get_tasks_by_ids(ids)

returns all tasks with the given task ids list

get_tasks_data(tasks, procs_ctrls=False, get_versions=False, run_on_server=False)

Gets the tasks’ parent sobjects and processes_ctrl sobjects for the given tasks, using one database query transaction each for the parents list and the proc_ctrls list.

Parameters:
  • tasks – list of tasks
  • procs_ctrls – Decides whether or not the processes_ctrl will be queried
  • get_versions – a dict key for versions available in each task will also be returned
  • run_on_server – runs the function on the server side
Returns:

dictionary of parents and processes_ctrls lists {‘parents’: [], ‘procs_ctrls’: [], ‘versions’: [{}]}

get_tasks_parents(tasks)

Gets the task parents sobjects in one database query transaction.

Parameters:tasks – list of tasks
Returns:list of asset sobjects in the same respective order of the given tasks.
get_tasks_sorted_by_order(tasks, reverse=True, collapse_parallel=False, collapse_singles=False, respect_subcontext=False)

Orders the given tasks based on the project workflow

Parameters:
  • tasks – list of tasks
  • reverse – The returned tasks will be as higher order first
  • collapse_parallel – all tasks with the same process order will be combined in one list
  • collapse_singles – if True, then single ordered tasks will be returned in lists ([[task1], [task2]..]), this is needed for compatibility with collapse_parallel
  • respect_subcontext – within the same process, tasks having subcontexts come after tasks that don’t have a subcontext
Returns:

[task1, task2..] (or [[task1], [task2]..] when collapse_singles) or [[parallel_tasks], [parallel_tasks]…] when collapse_parallel
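
Example (a sketch; assumes the tasks were queried beforehand and server is an nreal_brain.Initialize instance):

tasks = server.get_user_tasks(user='jdoe', project='myproj')
ordered = server.get_tasks_sorted_by_order(tasks, reverse=False,
                                           collapse_parallel=True)
# e.g. [[model_task], [texture_task, rig_task]] with parallel tasks grouped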

get_tasks_sorted_by_output_count(tasks, run_on_server=False)
get_user_nreal_tasks(user=None, project=None, status=None, statusList=None, as_supervisor=False, enable_icons=True, enable_versions=False, enable_proc_ctrl=False, run_on_server=False)

Gets a list of compound task objects where each task has all the information needed by the NReal System embedded into one dictionary; this is useful to minimize the number of queries and RPC calls for better performance, especially for client-side applications.

Parameters:
  • user
  • project
  • status
  • statusList
  • as_supervisor
  • enable_icons
  • enable_parents
  • enable_versions
  • enable_proc_ctrl
  • run_on_server
Returns:

NReal formatted task dictionary

get_user_performance(user)

Gets the performance rate of the given user

Parameters:user
Returns:float (0.0-1.0)
get_user_tasks(user=None, project=None, status=None, statusList=None, as_supervisor=False, context_starts_with=False)

Gets list of tasks based on the given args

Parameters:
  • user – the user the task is assigned to
  • project – project_code
  • status – query tasks with the given status only
  • statusList – if a list given, the tasks with the given status will be returned (overriding the status arg)
  • context_starts_with – tasks with context starting by the given value
  • as_supervisor – if the user is one of the head supervisors (like ‘Director’, ‘Production’, ‘Client’, ‘VFX/CG Sup’), the user arg is forced to None and all tasks matching the given args will be returned; otherwise, the supervisor’s tasks will be returned, while if the given user is not a supervisor, only the tasks assigned to him will be returned. Note: this function is meant for Task Manager internal use; for external development, it’s recommended to query the tasks based on your requirements using the direct Tactic API.
Returns:

Tasks list
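
Example (a sketch; the login and project code are placeholders, and server is assumed to be an nreal_brain.Initialize instance):

tasks = server.get_user_tasks(user='jdoe', project='myproj',
                              statusList=['Ready', 'In Progress'])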

most_suitable_user(task, users, at=None, single=True, ignore_statuses=['Approved'], exclude_tasks=[], respect_leaves=True, consider_user_performance=True, consider_user_skills=True, log_file=None, weekend_days=[5, 6])

Finds the most suitable user to do the given task at the given time; when nobody is suitable at that time, the earliest possible user will be returned.

Parameters:
  • task
  • users – a list of users to test; if a department name is provided in the users list, it will always be returned as the most suitable user. You would do this when you want to leave the user assignments to the department heads for later.
  • at – (‘%Y-%m-%d %H:%M:%S’/datetime) the desired task start time; if not given, the bid_start_date of the task is used instead.
  • single – if False, the return value is the list of suitable users ordered from best to worst (excluding the non-suitable ones).
  • ignore_statuses – if the user has a task in one of these statuses at that time, he will be considered free.
  • respect_leaves – respects the unavailability of the user from the leaves table.
  • exclude_tasks – a list of task codes to be treated as if they don’t exist.
  • consider_user_performance – considers the user performance rate computed by Seneferu (if there are no suitable users, the most suitable of the given users will be returned).
  • consider_user_skills – considers the user skill level and the tasks’ difficulty levels when picking the most suitable user (if there are no suitable users, the most suitable of the given users will be returned).
Returns:

list of dictionaries of user, from (datetime), busy_in (timedelta(days=3600000) or the duration of time before the user becomes busy; -1 when the user will never get busy), and first_availability {from, to}, which holds information about the first time the user was free at/after the given time, but for a period that doesn’t fit the given task.
order_tasks_by_duration(tasks, reverse=False)

Reorder the given tasks by bid duration length

Parameters:
  • tasks
  • reverse – longer tasks come first
Returns:

reordered tasks

production_manager_notification(message, task, all_affected_users=True, refresh_manager=True, user=None, additional_users=[], exclude_users=[])

Pops up a ‘Task Manager’ message to the assigned user(s) of the given task

Parameters:
  • message – the message
  • task – the task of interest
  • all_affected_users – whether the message should be sent to all users assigned on the same task or only to the direct assigned user on the task
  • refresh_manager – whether or not the Task Manager will be refreshed after the user confirm the message
  • user – if a user name provided, the notification will be directed to the given user only
  • additional_users – if an additional list of user names is provided, these users will receive the same notification the other users (or the directly provided user) received.
  • exclude_users – a list of users to be excluded from the notification popup.
  • task_sc – task search_code.
  • task_ctx – task context.
Returns:

None

publish_to_dailies(task, description='', paths=None, paths_types=None, asset_name=None, mode='copy', checkin_type='auto', create_icon=False, use_queue=False, overwrite=False)

Checks files in to bs/dailies under the given context inside the ‘today’ sobject (creating the ‘today’ sobject if it doesn’t exist); the created/used daily entry is a child of the related user task whose revision these files are published for. Note: when publishing to dailies, it’s allowed to publish several files of the same type to the same snapshot.

Parameters:
  • task
  • description
  • paths
  • paths_types
  • asset_name
  • mode
  • checkin_type
  • create_icon
  • use_queue
Returns:

status

query_cache_tags(context)

Returns the tag string that should be used for this process when caching to abc/usd or when tagging objects for future cache.

query_current_version(ast_sk, context)
Returns:the current version number padded(3) of the given ast/context
query_latest_version(ast_sk, context)
Returns:the latest version number padded(3)
query_publish_types(context, application)

Resolves the publish_types expression from bs/processes_ctrl in comparison to the given application and returns the file types needs to be published by this given application.

Return: a list of dictionaries {‘file_type’: type, ‘range’: range_description} that need to be published by the current application. file_type is a snapshot type, while the cache range is either ‘single’ or ‘full_range’; the cache range is needed to decide whether the cache or render should cover one frame (the first frame) or the full range of the timeline. Note: the cache range is decided based on the search_type (the database table), so anything related to bs/asset should return a ‘single’ cache range, while things related to bs/shot, for example, would return cache range=’full_range’.

query_resources_from_dep_file(dep_file, for_type=None, include_current=True)

Queries all the files used as dependencies for a given asset dep_file. This is usually needed to query all the files that need to be copied along with the current asset when checking out remotely, or when copying an asset to a different repository where all its deps are needed.

Parameters:
  • dep_file – the .dep file being generated usually with any task publish
  • include_current – if True, the current version of each asset member in the dep_file will also be included in the returned list; this applies only when the main task owning the dep_file is configured in processes_ctrl NOT to ‘use_versioned_deps’, which means this asset is configured to have its dependencies updated automatically with each new approved/current version.
Returns:

All dependency files referenced in the given JSON .dep file, in the format [{‘versioned’: file_path, ‘versionless’: versionless_file_path}]

query_task_versions(ast_sk, context)
Returns:‘current’, ‘latest’ version numbers and ‘others’ versions (all padded(3))
query_tasks_versions(tasks)

Similar to query_task_versions but for a given list of tasks; note that this method is much faster than calling query_task_versions per task.

Parameters:tasks
Returns:dict of {task_code: ‘current’, ‘latest’ version numbers and ‘others’ versions (all padded(3))}
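
Example (a sketch; the return shape follows the description above, and the task query and code are placeholders):

tasks = server.get_user_tasks(user='jdoe', project='myproj')
versions = server.query_tasks_versions(tasks)
# versions['TASK00002008'] -> {'current': '002', 'latest': '003', 'others': [...]}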
rename_dir_files(root_dir, asset_name, context, dir_contents, renumber_sequence=False)

Renames all the files under the given folder to <asset_name>_<context>.{ext}

Parameters:
  • root_dir – directory path containing the files to be renamed.
  • asset_name – this is the nice asset name
  • context
  • dir_contents – directory contents dictionary, produced by the method get_dir_contents()
  • renumber_sequence – if the folder has an image sequence, the image numbers will be renumbered to start from 0001
Returns:

status

revert_task_status(task_code, task=None, recurse_on_approval=True, query=False, triggers=False)

Reverts the task status to the status it had before the current one, or just queries that status.

Parameters:
  • task_code – the task code (note: all linked tasks will also have their status changed even if triggers are False)
  • recurse_on_approval – allows a simplified trigger to check whether reverting the task to Approved should affect any downstream tasks (in most cases, Waiting tasks should be reverted to Ready if the task is reverted to Approved). Note: if triggers are enabled, this flag has no effect, since the triggers do the same job but in more depth.
  • query – if True, the task status will not be updated; the new status will just be returned.
  • triggers – decides whether or not the triggers should be enabled
Returns:

the new status
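
Example (a sketch; first query what the reverted status would be, then apply it with triggers enabled; the task code is a placeholder):

would_be = server.revert_task_status('TASK00002008', query=True)
server.revert_task_status('TASK00002008', triggers=True)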

schedule_tasks(tasks=[], method='forward', date='today', from_date='', to_date='', ignore_approved=False, delete_invalid_tasks=False, resources_data={}, auto_assign_tasks=False, create_milestone=True, create_hire_plan=True, create_resources_plan=True, consider_user_performance=False, consider_user_skills=True, log_file='', keep_wip_user=True, assignment_method=2, assets_importance={}, ignore_processes=[], preferred_characters_order=[], preferred_scenes_order=[], preferred_sets_order=[])

Schedules the given tasks following the specified method.

Parameters:
  • tasks – any list of tasks (the list can be mixed tasks from different projects)
  • method – forward|backward in relation to the given date
  • date – str() in ‘%Y-%m-%d’ format. The schedule goes from the given date or upto the given date depending on the specified method
  • from_date
  • to_date
  • ignore_approved
  • delete_invalid_tasks – Deletes project invalid tasks before running the schedule process
  • resources_data – json/schedule_project dictionary having the processes and the users can do each one of them
  • auto_assign_tasks – re-assign the tasks to users based on the given resources_data
  • create_milestone – creates/updates the project milestone for each department group of processes
  • create_hire_plan – create/update the hire plan/contract for each user involved in the project based on the tasks assignments.
  • log_file – The log of the operation will be written to this file
  • keep_wip_user – tasks In Progress/Paused will not be reassigned to different users
  • ignore_processes – processes to be ignored (treated as finished)
  • preferred_characters_order – CG character building order (the system detects the hero stars order by analyzing the project breakdown), this can be overridden here
  • preferred_scenes_order – CG Shots order only, this will be overridden by the sets execution order for Live/VFX projects.
  • preferred_sets_order – for Live/VFX production, this overrides the shooting sets/locations order; by default, Seneferu will recommend shooting assets with heavier VFX work first.
  • assignment_method – there are three methods of auto-assigning tasks during the scheduling process:

  1. Assign all in place based on users’ load and task count.
  2. Assign within the scheduling process based on availability/project-execution simulation (more efficient; a little slower).
  3. Run the schedule/assign operation in serial mode rather than parallel mode (this maintains the asset execution order but may mess with the shot execution order, although it is the best and most efficient method).
Returns:status
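
Example (a sketch; schedules a project’s tasks forward from today with auto-assignment, the task query being a placeholder):

tasks = server.get_user_tasks(project='myproj')
status = server.schedule_tasks(tasks,
                               method='forward',
                               date='today',
                               auto_assign_tasks=True,
                               assignment_method=2)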
set_path_version(path, version)

Looks up any /v###/ or _v### in the given path and replaces them with the given version.

Parameters:
  • path – a path has a version
  • version – target version
Returns:

the path after setting the version
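
Example (a sketch; assumes the v### token keeps its 3-digit padding, with a placeholder path):

server.set_path_version('/show/ast/v003/chr001_rig_v003.ma', 12)
# -> '/show/ast/v012/chr001_rig_v012.ma'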

set_project(project)
set_status_to_review(task, triggers=True)

Analyzes the task status history, checks whether this is a technical process, and then inspects and sets the task status to the most suitable review state (Review/Art Review).

Parameters:
  • task
  • triggers
Returns:

the used Review State

sort_assets_by_importance(preferred_sets_order=[], preferred_scenes_order=[], preferred_characters_order=[], run_on_server=False)

Analyzes the project database and detects which asset is more important than another by knowing which assets are heroes and which are extras. This gives the system brain information that is useful in several areas: it helps it know how to assign tasks and to whom, which asset is most probably high-res and which will most likely be low-res, and which asset should be scheduled first when scheduling the project and which comes last…etc.

Note that you can override the produced asset orders by providing your pre-ordered characters and other lists; however, you cannot prioritize an asset over its children assets, which have to be done first to build that asset.

Parameters:
  • preferred_sets_order – allow reordering the live sets/locations shooting order (this will overwrite any overriding you did in the shots order)
  • preferred_characters_order – allows reordering the characters; however, if a nonsensical order is given, the brain will ignore/fix it.
  • preferred_scenes_order – preferred shots order execution (this will be overridden by locations order for VFX/Live action)
Returns:

OrderedDict of asset_code and occurrence count pairs

sort_tasks_based_on_assets_importance(tasks, importance_data={})

Orders the tasks based on assets importance followed by sorted seq/scn/shots

Parameters:
  • tasks – the tasks to be sorted
  • importance_data – (not implemented yet) assets importance data object returned by the sort_assets_by_importance() method; when overridden here, the given modified data are used instead of the data produced by the system analysis.
Returns:

list of sorted tasks

sync_asset_files(sk, context, file_type, version=0, force_versionless=True, queue=None)

Submits a file transfer job for the dependencies and files needed by the given asset task.

Parameters:
  • sk – the asset search_key
  • context – the task context
  • file_type – the file type of the asset you are synchronizing (as provided when checking it in.. like ‘maya’, ‘katana’… etc)
  • version – the version of the asset to be synchronized (0 is the current version)
  • force_versionless – will create the versionless symlinks even if the versioned files don’t exist yet
  • queue – the queue to be used for file transfer (if None, the configured pipeline[‘QUEUE’][‘SYSTEM’] is used)
Returns:

in_queue or files_ready

sync_render_tasks(current_task, ast_sk, layersInfo_file, process)

Creates and sets the status and subcontexts for the upcoming ‘render’ tasks based on the given layersInfo_file and the current_(lighting)_task for the given ast_sk

Returns:all the renderable tasks that need rendering
tasks_having_issues(as_sup=False)
test(tsk_sk, allInputs=False, application=None)

Debugs the task dependencies returned for the given task sk and the given application.

Parameters:
  • tsk_sk
  • allInputs
  • application
Returns:

update_remote_versionless(versioned_dst_paths, versionless_files)

deprecated… instead, we are syncing the versionless files via the global file server to avoid permission issues and to support Unix symlink updates on Windows clients. Updates the symlinks on this remote storage for the given versionless files based on how they are set on the Tactic server storage, so that remote users/vendors can have their versionless files pointing to the proper asset version.

Parameters:
  • versionless_files – list of versionless files paths
  • versioned_dst_paths – list of destinations of these files.
Returns:

None

Note: this function is for internal use; it is used by the syncing mechanism and is not intended for direct use.

validate_user_timelogs(user, date, timelog_table='sthpw/user_timelogs')
class nreal_brain.OrderedCounter(iterable=None, **kwds)
nreal_brain.decode(key, enc)
nreal_brain.encode(key, clear)
nreal_brain.my_excepthook(type, value, tback)
nreal_brain.process_events(event, server, sk, current_data, prev_data=None, update_status=None, prev_status=None, search_type=None, project_code=None)

Runs a specific trigger based on the given event; currently this works only with the Nothing-Real Tactic Template. To override the whole trigger process, you have to write your own asset management system.

Parameters:
  • event – the event name (like “asset_inserted”, “asset_updated”… etc).
  • server
  • sk
  • current_data
  • prev_data
  • update_status
  • prev_status
  • search_type
  • project_code
Returns:

nreal_brain.process_status(server, search_key, current_task, prev_data={}, update_status='', prev_status='', disable_revisit=False, respect_subcontext=True, user=None, dg_xml=None, proc_ctrl='processes_ctrl', desktop_notifications_enabled=True, refresh_manager=True, query_only=False)

The brain will traverse the project dependency graph and see what needs to be done with any task related to the one being processed, either directly or indirectly; then it will set statuses and send notifications to whom it may concern.

Parameters:
  • server – the database server (currently only the Tactic backend is supported via the DB translation module; to support your own backend, a DB translator needs to be developed, which is not recommended for releases prior to v2.0 of the brain)
  • dg_xml – Dependency Graph workflow in xml format describes the INs and OUTs of each process, if not provided, the brain will consider using the one provided in the NReal Tactic Project Template.
  • proc_ctrl – the NReal processes control database table name
  • current_task – the currently being processed task
  • search_key – the task unique identifier of the current task
  • prev_data – dictionary of the original task data before the update (only the updated - as key/value pairs)
  • update_status – the new task status
  • prev_status – the previous task status before the update
  • disable_revisit – optionally disables setting affected Approved tasks to “Revisit”
  • respect_subcontext – the task subcontext will be taken in account in addition to the process, refer to NReal Brain documentation for more details.
  • query_only – Just returns the tasks to be affected if the given task is approved without altering any tasks statuses.
  • user – Optionally - the user logged in to the server and made the transaction (if not provided will be queried)
  • desktop_notifications_enabled – in addition to notes and email notifications, a desktop popup notification will be sent to whom it may concern via each user’s Production Manager application
  • refresh_manager – refreshes the Production Manager for the affected users so the interface shows up-to-date data
Returns:

list of affected tasks
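
Example (a sketch; typically called from a status-update trigger, with the trigger supplying the task sobject and the status values shown here as placeholders):

affected = nreal_brain.process_status(server,
                                      task['__search_key__'],
                                      task,
                                      update_status='Approved',
                                      prev_status='Review',
                                      query_only=True)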