Doctests
SENAITE LIMS API
The SENAITE LIMS API provides single functions for single purposes.
This test builds entirely on the API, without any further imports needed.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API
API
The purpose of this API is to help coders to follow the DRY principle (Don’t
Repeat Yourself). It also ensures that the most effective and efficient method is
used to achieve a task.
Import it first:
>>> from bika.lims import api
Setup the test user
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> from senaite.app.supermodel import SuperModel
Getting the Portal
The Portal is the SENAITE LIMS root object:
>>> portal = api.get_portal()
>>> portal
<PloneSite at /plone>
Getting the SENAITE Setup object
The Setup object gives access to all of the SENAITE configuration settings:
>>> setup = api.get_setup()
>>> setup
<BikaSetup at /plone/bika_setup>
>>> bika_setup = api.get_bika_setup()
>>> bika_setup
<BikaSetup at /plone/bika_setup>
>>> setup == bika_setup
True
Since version 2.3.0 we provide a Dexterity based setup folder to hold configuration values:
>>> senaite_setup = api.get_senaite_setup()
>>> senaite_setup
<Setup at /plone/setup>
Creating new Content
Creating new content in SENAITE LIMS requires some special knowledge.
This function helps to do it right and creates the content for you.
Here we create a new Client in the plone/clients folder:
>>> client = api.create(portal.clients, "Client", title="Test Client")
>>> client
<Client at /plone/clients/client-1>
>>> client.Title()
'Test Client'
Created objects are properly indexed:
>>> services = portal.bika_setup.bika_analysisservices
>>> service = api.create(services, "AnalysisService",
... title="Dummy service", Keyword="DUM")
>>> uid = api.get_uid(service)
>>> catalog = api.get_tool("senaite_catalog_setup")
>>> brains = catalog(portal_type="AnalysisService", UID=uid)
>>> brains[0].getKeyword
'DUM'
Editing Content
This function helps to edit a given content.
Here we update the Client we created earlier, an AT content type:
>>> api.edit(client, AccountNumber="12343567890", BankName="BTC Bank")
>>> client.getAccountNumber()
'12343567890'
>>> client.getBankName()
'BTC Bank'
It also works for DX content types:
>>> api.edit(senaite_setup, site_logo_css="my-test-logo")
>>> senaite_setup.getSiteLogoCSS()
'my-test-logo'
The field needs to be writeable:
>>> field = client.getField("BankName")
>>> field.readonly = True
>>> api.edit(client, BankName="Lydian Lion Coins Bank")
Traceback (most recent call last):
[...]
ValueError: Field 'BankName' is readonly
>>> client.getBankName()
'BTC Bank'
>>> field.readonly = False
>>> api.edit(client, BankName="Lydian Lion Coins Bank")
>>> client.getBankName()
'Lydian Lion Coins Bank'
The user also needs sufficient permissions to change the value:
>>> field.write_permission = "Delete objects"
>>> api.edit(client, BankName="Electrum Coins")
Traceback (most recent call last):
[...]
Unauthorized: Field 'BankName' is not writeable
>>> client.getBankName()
'Lydian Lion Coins Bank'
Unless we explicitly bypass the permission check:
>>> api.edit(client, check_permissions=False, BankName="Electrum Coins")
>>> client.getBankName()
'Electrum Coins'
Restore permission:
>>> field.write_permission = "Modify Portal Content"
Getting an Object
Getting the object from a catalog brain is a common task.
This function provides a unified interface to portal objects and brains.
Furthermore it is idempotent, so it can be called multiple times in a row.
We will demonstrate the usage on the client object we created above:
>>> api.get_object(client)
<Client at /plone/clients/client-1>
>>> api.get_object(api.get_object(client))
<Client at /plone/clients/client-1>
Now we show it with catalog results:
>>> brains = api.search({"portal_type": "Client"})
>>> brains
[<Products.ZCatalog.Catalog.mybrains object at 0x...>]
>>> brain = brains[0]
>>> api.get_object(brain)
<Client at /plone/clients/client-1>
>>> api.get_object(api.get_object(brain))
<Client at /plone/clients/client-1>
The function also accepts a UID:
>>> api.get_object(api.get_uid(brain))
<Client at /plone/clients/client-1>
And also accepts SuperModel objects:
>>> api.get_object(SuperModel(brain))
<Client at /plone/clients/client-1>
And it returns the portal object when the UID is "0":
>>> api.get_object("0")
<PloneSite at /plone>
Unsupported objects raise an error:
>>> api.get_object(object())
Traceback (most recent call last):
[...]
APIError: <object object at 0x...> is not supported.
>>> api.get_object("i_am_not_an_uid")
Traceback (most recent call last):
[...]
APIError: 'i_am_not_an_uid' is not supported.
However, if a default value is provided, the default will be returned in such
a case instead:
>>> api.get_object(object(), default=None) is None
True
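The default-value behaviour follows a common pattern: try to resolve the value, and only swallow the error when a default was explicitly passed. Below is a minimal, self-contained sketch of that pattern; the helper name `resolve_or_default` is hypothetical and not part of the API:

```python
_marker = object()

def resolve_or_default(resolve, value, default=_marker):
    # hypothetical helper illustrating the pattern behind
    # api.get_object(..., default=...): swallow errors only
    # when a default was explicitly provided
    try:
        return resolve(value)
    except Exception:
        if default is _marker:
            raise  # no default given: propagate the error
        return default
```

With this sketch, `resolve_or_default(int, "5")` resolves normally, while `resolve_or_default(int, "not a number", None)` returns the default instead of raising.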
To check if an object is supported, e.g. is an ATCT, Dexterity, ZCatalog or
Portal object, we can use the is_object function:
>>> api.is_object(client)
True
>>> api.is_object(brain)
True
>>> api.is_object(api.get_portal())
True
>>> api.is_object(SuperModel(client))
True
>>> api.is_object(None)
False
>>> api.is_object(object())
False
Checking if an Object is the Portal
Sometimes it can be handy to check if the current object is the portal:
>>> api.is_portal(portal)
True
>>> api.is_portal(client)
False
>>> api.is_portal(object())
False
Checking if an Object is a Catalog Brain
Knowing if we have an object or a brain can be handy. This function checks this for you:
>>> api.is_brain(brain)
True
>>> api.is_brain(api.get_object(brain))
False
>>> api.is_brain(object())
False
Checking if an Object is a Dexterity Content
This function checks if an object is a Dexterity content type:
>>> api.is_dexterity_content(client)
False
>>> api.is_dexterity_content(portal)
False
It is also possible to check by portal type:
>>> api.is_dx_type("InterpretationTemplate")
True
>>> api.is_dx_type("Client")
False
Checking if an Object is an AT Content
This function checks if an object is an Archetypes content type:
>>> api.is_at_content(client)
True
>>> api.is_at_content(portal)
False
>>> api.is_at_content(object())
False
It is also possible to check by portal type:
>>> api.is_at_type("Client")
True
>>> api.is_at_type("InterpretationTemplate")
False
Getting the Schema of a Content
The schema contains the fields of a content object. Getting the schema is a
common task, but differs between ATContentType based objects and Dexterity
based objects. This function brings it under one umbrella:
>>> schema = api.get_schema(client)
>>> schema
<Products.Archetypes.Schema.Schema object at 0x...>
Catalog brains are also supported:
>>> api.get_schema(brain)
<Products.Archetypes.Schema.Schema object at 0x...>
Getting the behaviors of a type
Dexterity contents might extend their schema fields via behaviors.
This function shows the currently active behaviors:
>>> api.get_behaviors("SampleContainer")
(...)
It is possible to enable behaviors dynamically:
>>> "plone.basic" in api.get_behaviors("SampleContainer")
False
>>> api.enable_behavior("SampleContainer", "plone.basic")
>>> "plone.basic" in api.get_behaviors("SampleContainer")
True
And remove it again:
>>> api.disable_behavior("SampleContainer", "plone.basic")
>>> "plone.basic" in api.get_behaviors("SampleContainer")
False
Getting the Fields of a Content
The fields contain all the values that an object holds and are therefore
responsible for getting and setting the information.
This function returns the fields as a dictionary mapping of {"key": value}:
>>> fields = api.get_fields(client)
>>> fields.get("ClientID")
<Field ClientID(string:rw)>
Catalog brains are also supported:
>>> api.get_fields(brain).get("ClientID")
<Field ClientID(string:rw)>
Getting the ID of a Content
Getting the ID is a common task in SENAITE LIMS.
This function takes care that catalog brains are not woken up for this task:
>>> api.get_id(portal)
'plone'
>>> api.get_id(client)
'client-1'
>>> api.get_id(brain)
'client-1'
Getting the Title of a Content
Getting the Title is a common task in SENAITE LIMS.
This function takes care that catalog brains are not woken up for this task:
>>> api.get_title(portal)
'SENAITE LIMS'
>>> api.get_title(client)
'Test Client'
>>> api.get_title(brain)
'Test Client'
Getting the Description of a Content
Getting the Description is a common task in SENAITE LIMS.
This function takes care that catalog brains are not woken up for this task:
>>> api.get_description(portal)
''
>>> api.get_description(client)
''
>>> api.get_description(brain)
''
Getting the UID of a Content
Getting the UID is a common task in SENAITE LIMS.
This function takes care that catalog brains are not woken up for this task.
The portal object actually has no UID; this function therefore defines it to be '0':
>>> api.get_uid(portal)
'0'
>>> uid_client = api.get_uid(client)
>>> uid_client_brain = api.get_uid(brain)
>>> uid_client == uid_client_brain
True
If a UID is passed to the function, it will return the value unchanged:
>>> api.get_uid(uid_client) == uid_client
True
Getting the URL of a Content
Getting the URL is a common task in SENAITE LIMS.
This function takes care that catalog brains are not woken up for this task:
>>> api.get_url(portal)
'http://nohost/plone'
>>> api.get_url(client)
'http://nohost/plone/clients/client-1'
>>> api.get_url(brain)
'http://nohost/plone/clients/client-1'
Getting the Icon of a Content
>>> api.get_icon(client)
'<img width="16" height="16" src="http://nohost/plone/senaite_theme/icon/client" title="Test Client" />'
>>> api.get_icon(brain)
'<img width="16" height="16" src="http://nohost/plone/senaite_theme/icon/client" title="Test Client" />'
>>> api.get_icon(client, html_tag=False)
'http://nohost/plone/senaite_theme/icon/client'
>>> api.get_icon(client, html_tag=False)
'http://nohost/plone/senaite_theme/icon/client'
Getting a catalog brain by UID
This function finds a catalog brain by its unique ID (UID):
>>> api.get_brain_by_uid(api.get_uid(client))
<Products.Archetypes.UIDCatalog.plugbrains object at ...>
Getting an object by UID
This function finds an object by its unique ID (UID).
The portal object with the defined UID of '0' is also supported:
>>> api.get_object_by_uid('0')
<PloneSite at /plone>
>>> api.get_object_by_uid(uid_client)
<Client at /plone/clients/client-1>
>>> api.get_object_by_uid(uid_client_brain)
<Client at /plone/clients/client-1>
If a default value is provided, the function will never fail. Any exception
or error will result in the default value being returned:
>>> api.get_object_by_uid('invalid uid', 'default')
'default'
>>> api.get_object_by_uid(None, 'default')
'default'
Getting an object by Path
This function finds an object by its physical path:
>>> api.get_object_by_path('/plone')
<PloneSite at /plone>
>>> api.get_object_by_path('/plone/clients/client-1')
<Client at /plone/clients/client-1>
Paths outside the portal raise an error:
>>> api.get_object_by_path('/root')
Traceback (most recent call last):
[...]
APIError: Not a physical path inside the portal.
If a default value is provided, it is returned on any exception:
>>> api.get_object_by_path('/invalid/path', 'default')
'default'
>>> api.get_object_by_path(None, 'default')
'default'
Getting the Physical Path of an Object
The physical path describes exactly where an object is located inside the portal.
This function unifies the different approaches to get the physical path and does
so in the most efficient way:
>>> api.get_path(portal)
'/plone'
>>> api.get_path(client)
'/plone/clients/client-1'
>>> api.get_path(brain)
'/plone/clients/client-1'
>>> api.get_path(object())
Traceback (most recent call last):
[...]
APIError: <object object at 0x...> is not supported.
Getting the Physical Parent Path of an Object
This function returns the physical path of the parent object:
>>> api.get_parent_path(client)
'/plone/clients'
>>> api.get_parent_path(brain)
'/plone/clients'
However, this function goes only up to the portal object:
>>> api.get_parent_path(portal)
'/plone'
Like with the other functions, only portal objects are supported:
>>> api.get_parent_path(object())
Traceback (most recent call last):
[...]
APIError: <object object at 0x...> is not supported.
Getting the Parent Object
This function returns the parent object:
>>> api.get_parent(client)
<ClientFolder at /plone/clients>
Brains are also supported:
>>> api.get_parent(brain)
<ClientFolder at /plone/clients>
However, this function goes only up to the portal object:
>>> api.get_parent(portal)
<PloneSite at /plone>
Like with the other functions, only portal objects are supported:
>>> api.get_parent(object())
Traceback (most recent call last):
[...]
APIError: <object object at 0x...> is not supported.
Searching Objects
Searching in SENAITE LIMS requires knowledge in which catalog the object is indexed.
This function unifies all SENAITE LIMS catalogs into a single search interface:
>>> results = api.search({'portal_type': 'Client'})
>>> results
[<Products.ZCatalog.Catalog.mybrains object at 0x...>]
Now we create some objects which are located in the senaite_catalog_setup:
>>> instruments = bika_setup.bika_instruments
>>> instrument1 = api.create(instruments, "Instrument", title="Instrument-1")
>>> instrument2 = api.create(instruments, "Instrument", title="Instrument-2")
>>> instrument3 = api.create(instruments, "Instrument", title="Instrument-3")
>>> results = api.search({'portal_type': 'Instrument', 'sort_on': 'getId'})
>>> len(results)
3
>>> map(api.get_id, results)
['instrument-1', 'instrument-2', 'instrument-3']
Queries spanning multiple catalogs are refused, as they would require manual
merging and sorting of the results afterwards. Thus, we fail here:
>>> results = api.search({'portal_type': ['Client', 'ClientFolder', 'Instrument'], 'sort_on': 'getId'})
Traceback (most recent call last):
[...]
APIError: Multi Catalog Queries are not supported!
Catalog queries without a portal_type default to the uid_catalog:
>>> analysiscategories = bika_setup.bika_analysiscategories
>>> analysiscategory1 = api.create(analysiscategories, "AnalysisCategory", title="AC-1")
>>> analysiscategory2 = api.create(analysiscategories, "AnalysisCategory", title="AC-2")
>>> analysiscategory3 = api.create(analysiscategories, "AnalysisCategory", title="AC-3")
>>> results = api.search({"id": "analysiscategory-1"})
>>> len(results)
1
>>> res = results[0]
>>> res.aq_parent
<UIDCatalog at /plone/uid_catalog>
If we add the portal_type, the search function asks the archetype_tool for
the right catalog and returns a result:
>>> results = api.search({"portal_type": "AnalysisCategory", "id": "analysiscategory-1"})
>>> len(results)
1
We could also explicitly define a catalog to achieve the same:
>>> results = api.search({"id": "analysiscategory-1"}, catalog="senaite_catalog_setup")
>>> len(results)
1
To see inactive or dormant items, we must either query them explicitly or
filter them out manually afterwards:
>>> results = api.search({"portal_type": "AnalysisCategory", "id": "analysiscategory-1"})
>>> len(results)
1
Now we deactivate the item:
>>> analysiscategory1 = api.do_transition_for(analysiscategory1, 'deactivate')
>>> api.is_active(analysiscategory1)
False
The search will still find the item:
>>> results = api.search({"portal_type": "AnalysisCategory", "id": "analysiscategory-1"})
>>> len(results)
1
Unless we filter it out manually:
>>> len(filter(api.is_active, results))
0
Or include the inactive state in the query:
>>> results = api.search({"portal_type": "AnalysisCategory", "id": "analysiscategory-1", "is_active": False})
>>> len(results)
1
Getting the registered Catalogs
SENAITE LIMS uses multiple catalogs registered via the Archetype Tool. This
function returns a list of registered catalogs for a brain or object:
>>> api.get_catalogs_for(client)
[...]
>>> api.get_catalogs_for(instrument1)
[...]
>>> api.get_catalogs_for(analysiscategory1)
[...]
Getting an Attribute of an Object
This function handles attributes and methods the same and returns their value.
It also handles security and is able to return a default value instead of
raising an Unauthorized error:
>>> uid_brain = api.safe_getattr(brain, "UID")
>>> uid_obj = api.safe_getattr(client, "UID")
>>> uid_brain == uid_obj
True
>>> api.safe_getattr(brain, "review_state")
'active'
>>> api.safe_getattr(brain, "NONEXISTING")
Traceback (most recent call last):
[...]
APIError: Attribute 'NONEXISTING' not found.
>>> api.safe_getattr(brain, "NONEXISTING", "")
''
Getting the UID Catalog
This tool is needed so often that this function simply returns it:
>>> api.get_uid_catalog()
<UIDCatalog at /plone/uid_catalog>
Getting the Review History of an Object
The review history gives information about the objects’ workflow changes:
>>> review_history = api.get_review_history(client)
>>> sorted(review_history[0].items())
[('action', None), ('actor', 'test_user_1_'), ('comments', ''), ('review_state', 'active'), ('time', DateTime('...'))]
Getting the Revision History of an Object
The revision history provides extended information about the objects’ workflow changes:
>>> revision_history = api.get_revision_history(client)
>>> sorted(revision_history[0])
['action', 'actor', 'actor_home', 'actorid', 'comments', 'review_state', 'state_title', 'time', 'transition_title', 'type']
>>> revision_history[0]["transition_title"]
u'Create'
Getting the assigned Workflows of an Object
This function returns all assigned workflows for a given object:
>>> api.get_workflows_for(bika_setup)
('senaite_setup_workflow',)
>>> api.get_workflows_for(client)
('senaite_client_workflow',)
This function also supports the portal_type as parameter:
>>> api.get_workflows_for(api.get_portal_type(client))
('senaite_client_workflow',)
Getting the Workflow Status of an Object
This function returns the state of a given object:
>>> api.get_workflow_status_of(client)
'active'
It is also able to return the state from a brain without waking it up:
>>> api.get_workflow_status_of(brain)
'active'
It is also capable to get the state of another state variable:
>>> api.get_workflow_status_of(client, "review_state")
'active'
Deactivate the client:
>>> api.do_transition_for(client, "deactivate")
<Client at /plone/clients/client-1>
>>> api.get_workflow_status_of(client)
'inactive'
Reactivate the client:
>>> api.do_transition_for(client, "activate")
<Client at /plone/clients/client-1>
>>> api.get_workflow_status_of(client)
'active'
Getting the previous Workflow Status of an Object
This function gives the last workflow state of an object:
>>> api.get_workflow_status_of(client)
'active'
>>> api.get_previous_worfklow_status_of(client)
'inactive'
Specific states can be skipped:
>>> api.get_previous_worfklow_status_of(client, skip=['inactive'])
'active'
A default value can be set in case no previous state was found:
>>> api.get_previous_worfklow_status_of(client, skip=['active', 'inactive'], default='notfound')
'notfound'
Getting the available transitions for an object
This function returns all possible transitions from all workflows in the
object’s workflow chain.
Let’s create a Batch. It should allow us to invoke two different transitions:
‘close’ and ‘cancel’:
>>> batch1 = api.create(portal.batches, "Batch", title="Test Batch")
>>> transitions = api.get_transitions_for(batch1)
>>> len(transitions)
2
The transitions are returned as a list of dictionaries. Since we cannot rely on
the order of dictionary keys, we will have to satisfy ourselves here with
checking that the two expected transitions are present in the return value:
>>> 'Close' in [t['title'] for t in transitions]
True
>>> 'Cancel' in [t['title'] for t in transitions]
True
Getting the creation date of an object
This function returns the creation date of a given object:
>>> created = api.get_creation_date(client)
>>> created
DateTime('...')
Getting the modification date of an object
This function returns the modification date of a given object:
>>> modified = api.get_modification_date(client)
>>> modified
DateTime('...')
Getting the review state of an object
This function returns the review state of a given object:
>>> review_state = api.get_review_status(client)
>>> review_state
'active'
It should also work for catalog brains:
>>> results = api.search({"portal_type": "Client", "UID": api.get_uid(client)})
>>> len(results)
1
>>> api.get_review_status(results[0]) == review_state
True
Getting the registered Catalogs of an Object
This function returns a list of all registered catalogs within the
archetype_tool for a given portal_type or object:
>>> api.get_catalogs_for(client)
[...]
It also supports the portal_type as a parameter:
>>> api.get_catalogs_for("Analysis")
[...]
Transitioning an Object
This function performs a workflow transition and returns the object:
>>> client = api.do_transition_for(client, "deactivate")
>>> api.is_active(client)
False
>>> client = api.do_transition_for(client, "activate")
>>> api.is_active(client)
True
Getting inactive/cancellation state of different workflows
There are two workflows allowing an object to be set inactive. We provide
the is_active function to return False if an item is set inactive with either
of these workflows.
The search() test above already covers the is_active function’s handling of
brain states. Here we just check that object states are handled correctly.
For setup types, we use senaite_deactivable_type_workflow:
>>> method1 = api.create(portal.methods, "Method", title="Test Method")
>>> api.is_active(method1)
True
>>> method1 = api.do_transition_for(method1, 'deactivate')
>>> api.is_active(method1)
False
For transactional types, senaite_cancellable_type_workflow is used:
>>> maintenance_task = api.create(instrument1, "InstrumentMaintenanceTask", title="Maintenance Task for Instrument 1")
>>> api.is_active(maintenance_task)
True
>>> maintenance_task = api.do_transition_for(maintenance_task, "cancel")
>>> api.is_active(maintenance_task)
False
But there are custom workflows that can also provide a cancel transition, like
senaite_batch_workflow, to which the Batch type is bound:
>>> batch1 = api.create(portal.batches, "Batch", title="Test Batch")
>>> api.is_active(batch1)
True
>>> batch1 = api.do_transition_for(batch1, 'cancel')
>>> api.is_active(batch1)
False
Getting the granted Roles for a certain Permission on an Object
This function returns a list of Roles, which are granted the given Permission
for the passed in object:
>>> api.get_roles_for_permission("Modify portal content", portal)
['LabClerk', 'LabManager', 'Manager', 'Owner']
>>> api.get_roles_for_permission("Modify portal content", bika_setup)
['LabClerk', 'LabManager', 'Manager']
Checking if an Object is Versionable
- NOTE: Versioning is outdated!
- This code will be removed as soon as we drop the HistoryAwareReferenceField
reference between Calculation and Analysis.
Instruments are not versionable:
>>> api.is_versionable(instrument1)
False
Calculations are versionable:
>>> calculations = bika_setup.bika_calculations
>>> calc = api.create(calculations, "Calculation", title="Calculation 1")
>>> api.is_versionable(calc)
True
Getting the Version of an Object
This function returns the version as an integer:
>>> api.get_version(calc)
0
Calling processForm bumps the version:
>>> calc.processForm()
>>> api.get_version(calc)
1
Getting a Browser View
Getting a browser view is a common task in SENAITE LIMS:
>>> api.get_view("plone")
<Products.Five.browser.metaconfigure.Plone object at 0x...>
>>> api.get_view("workflow_action")
<Products.Five.browser.metaconfigure.WorkflowActionHandler object at 0x...>
Getting the Request
This function will return the global request object:
>>> api.get_request()
<HTTPRequest, URL=http://nohost>
Getting a Group
Users in SENAITE LIMS are managed in groups. A common group is the Clients group,
where all users of client contacts are grouped.
This function gives easy access and is also idempotent:
>>> clients_group = api.get_group("Clients")
>>> clients_group
<GroupData at /plone/portal_groupdata/Clients used for /plone/acl_users/source_groups>
>>> api.get_group(clients_group)
<GroupData at /plone/portal_groupdata/Clients used for /plone/acl_users/source_groups>
Non-existing groups are not found:
>>> api.get_group("NonExistingGroup")
Getting a User
Users can be fetched by their user id. The function is idempotent and handles
user objects as well:
>>> from plone.app.testing import TEST_USER_ID
>>> user = api.get_user(TEST_USER_ID)
>>> user
<Products.PlonePAS.tools.memberdata.MemberData object at 0x...>
>>> api.get_user(api.get_user(TEST_USER_ID))
<Products.PlonePAS.tools.memberdata.MemberData object at 0x...>
Non-existing users are not found:
>>> api.get_user("NonExistingUser")
Getting User Properties
User properties, like the email or full name, are stored in the memberdata
tool and not on the user object itself. This function retrieves these
properties for you:
>>> properties = api.get_user_properties(TEST_USER_ID)
>>> sorted(properties.items())
[('description', ''), ('email', ''), ('error_log_update', 0.0), ('ext_editor', False), ...]
>>> sorted(api.get_user_properties(user).items())
[('description', ''), ('email', ''), ('error_log_update', 0.0), ('ext_editor', False), ...]
An empty property dict is returned if no user could be found:
>>> api.get_user_properties("NonExistingUser")
{}
>>> api.get_user_properties(None)
{}
Getting Users by their Roles
>>> from operator import methodcaller
Roles in SENAITE LIMS are basically names for sets of permissions. For
example, LabManager describes a role which is granted the most permissions.
So first I’ll add some users with some different roles:
>>> for user in [{'username': 'labmanager_1', 'roles': ['LabManager']},
... {'username': 'labmanager_2', 'roles': ['LabManager']},
... {'username': 'sampler_1', 'roles': ['Sampler']},
... {'username': 'client_1', 'roles': ['Client']}]:
... member = portal.portal_registration.addMember(
... user['username'], user['username'],
... properties={'username': user['username'],
... 'email': user['username'] + "@example.com",
... 'fullname': user['username']})
... setRoles(portal, user['username'], user['roles'])
... # If user is a LabManager, add Owner local role on clients folder
... # TODO ask @ramonski, is this still required?
... if 'LabManager' in user['roles']:
... portal.clients.manage_setLocalRoles(user['username'], ['Owner'])
To see which users are granted a certain role, you can use this function:
>>> labmanagers = api.get_users_by_roles(["LabManager"])
>>> sorted(labmanagers, key=methodcaller('getId'))
[<PloneUser 'labmanager_1'>, <PloneUser 'labmanager_2'>]
A single value can also be passed into this function:
>>> sorted(api.get_users_by_roles("Sampler"), key=methodcaller('getId'))
[<PloneUser 'sampler_1'>]
Getting the Current User
Getting the current logged in user:
>>> api.get_current_user()
<Products.PlonePAS.tools.memberdata.MemberData object at 0x...>
Creating a Cache Key
This function creates a good cache key for a generic object or brain:
>>> key1 = api.get_cache_key(client)
>>> key1
'Client-client-1-...'
NOTE: Function will be deleted in senaite.core 3.0.0
SENAITE Cache Key decorator
This decorator can be used for plone.memoize cache decorators in classes.
The decorator expects that the first argument is the class instance (self) and
the second argument a brain or object:
>>> from plone.memoize.volatile import cache
>>> class SENAITEClass(object):
... @cache(api.bika_cache_key_decorator)
... def get_very_expensive_calculation(self, obj):
... print "very expensive calculation"
... return "calculation result"
Calling the (expensive) method of the class does the calculation just once:
>>> instance = SENAITEClass()
>>> instance.get_very_expensive_calculation(client)
very expensive calculation
'calculation result'
>>> instance.get_very_expensive_calculation(client)
'calculation result'
The decorator can also handle brains:
>>> from senaite.core.catalog import CLIENT_CATALOG
>>> instance = SENAITEClass()
>>> cat = api.get_tool(CLIENT_CATALOG)
>>> brain = cat(portal_type="Client")[0]
>>> instance.get_very_expensive_calculation(brain)
very expensive calculation
'calculation result'
>>> instance.get_very_expensive_calculation(brain)
'calculation result'
NOTE: Function will be deleted in senaite.core 3.0.0
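The decorator is an instance of a generic memoization pattern: compute a cache key from the arguments and store the result under it. Below is a self-contained sketch of that pattern; the names are illustrative and this is not the plone.memoize implementation:

```python
def memoize_by_key(get_key):
    # decorator factory: cache method results under a computed key
    def decorator(func):
        cache = {}
        def wrapper(self, obj):
            key = get_key(func, self, obj)
            if key not in cache:
                cache[key] = func(self, obj)
            return cache[key]
        return wrapper
    return decorator

calls = []

class Demo(object):
    @memoize_by_key(lambda func, self, obj: (func.__name__, obj))
    def expensive(self, obj):
        calls.append(obj)  # record how often the body actually runs
        return obj * 2
```

Calling `Demo().expensive(3)` twice runs the method body only once; the second call is served from the cache, just like the doctest above.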
ID Normalizer
Normalizes a string to be usable as a system ID:
>>> api.normalize_id("My new ID")
'my-new-id'
>>> api.normalize_id("Really/Weird:Name;")
'really-weird-name'
>>> api.normalize_id(None)
Traceback (most recent call last):
[...]
APIError: Type of argument must be string, found '<type 'NoneType'>'
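Under the hood the API delegates to Plone’s normalizer utilities; conceptually, ID normalization lowercases the input and collapses runs of non-alphanumeric characters into single dashes. A rough, self-contained sketch of that idea (not the actual implementation):

```python
import re

def normalize_id(value):
    # rough sketch: lowercase, replace non-alphanumeric runs with "-",
    # then strip leading/trailing dashes
    if not isinstance(value, str):
        raise TypeError("Type of argument must be string")
    return re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")
```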
File Normalizer
Normalizes a string to be usable as a file name:
>>> api.normalize_filename("My new ID")
'My new ID'
>>> api.normalize_filename("Really/Weird:Name;")
'Really-Weird-Name'
>>> api.normalize_filename(None)
Traceback (most recent call last):
[...]
APIError: Type of argument must be string, found '<type 'NoneType'>'
Check if an UID is valid
Checks if a UID is a valid 32-character alphanumeric UID:
>>> api.is_uid("ajw2uw9")
False
>>> api.is_uid(None)
False
>>> api.is_uid('0e1dfc3d10d747bf999948a071bc161e')
True
Per convention we assume "0" is the UID of the portal object (PloneSite).
With validate=True, the function additionally checks that an object with the
given UID actually exists:
>>> api.is_uid("ajw2uw9", validate=True)
False
>>> api.is_uid(None, validate=True)
False
>>> api.is_uid("", validate=True)
False
>>> api.is_uid('0e1dfc3d10d747bf999948a071bc161e', validate=True)
False
>>> api.is_uid("0", validate=True)
True
>>> asfolder = portal.bika_setup.bika_analysisservices
>>> serv = api.create(asfolder, "AnalysisService", title="AS test")
>>> serv.setKeyword("as_test")
>>> uid = serv.UID()
>>> api.is_uid(uid, validate=True)
True
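Without validation, the check is purely syntactic: either the portal convention "0", or a 32-character hexadecimal string. A minimal sketch of that syntactic check (the name `looks_like_uid` is illustrative; the real function additionally supports the validate flag shown above):

```python
import re

UID_RX = re.compile(r"^[a-f0-9]{32}$")

def looks_like_uid(value):
    # syntactic check only: "0" (the portal) or 32 lowercase hex chars
    if value == "0":
        return True
    if not isinstance(value, str):
        return False
    return bool(UID_RX.match(value))
```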
Check if a Date is valid
Do some imports first:
>>> from datetime import datetime
>>> from DateTime import DateTime
Checks if a DateTime is valid:
>>> now = DateTime()
>>> api.is_date(now)
True
>>> now = datetime.now()
>>> api.is_date(now)
True
>>> now = DateTime(now)
>>> api.is_date(now)
True
>>> api.is_date(None)
False
>>> api.is_date('2018-04-23')
False
Try conversions to Date
Try to convert to DateTime:
>>> now = DateTime()
>>> zpdt = api.to_date(now)
>>> zpdt.ISO8601() == now.ISO8601()
True
>>> now = datetime.now()
>>> zpdt = api.to_date(now)
>>> pydt = zpdt.asdatetime()
Note that here, for the comparison between dates, we convert DateTime to Python
datetime, because DateTime.strftime() is broken for timezones (it always uses
the system time zone and ignores the timezone and offset of the DateTime
instance itself):
>>> pydt.strftime('%Y-%m-%dT%H:%M:%S') == now.strftime('%Y-%m-%dT%H:%M:%S')
True
Try the same, but with utcnow() instead:
>>> now = datetime.utcnow()
>>> zpdt = api.to_date(now)
>>> pydt = zpdt.asdatetime()
>>> pydt.strftime('%Y-%m-%dT%H:%M:%S') == now.strftime('%Y-%m-%dT%H:%M:%S')
True
Now we convert just a string formatted date:
>>> strd = "2018-12-01 17:50:34"
>>> zpdt = api.to_date(strd)
>>> zpdt.ISO8601()
'2018-12-01T17:50:34'
Now we convert just a string formatted date, but with timezone:
>>> strd = "2018-12-01 17:50:34 GMT+1"
>>> zpdt = api.to_date(strd)
>>> zpdt.ISO8601()
'2018-12-01T17:50:34+01:00'
We also check a bad date here (note the month is 13):
>>> strd = "2018-13-01 17:50:34"
>>> zpdt = api.to_date(strd)
>>> api.is_date(zpdt)
False
And with European format:
>>> strd = "01.12.2018 17:50:34"
>>> zpdt = api.to_date(strd)
>>> zpdt.ISO8601()
'2018-12-01T17:50:34'
>>> zpdt = api.to_date(None)
>>> zpdt is None
True
Use a string formatted date as fallback:
>>> strd = "2018-13-01 17:50:34"
>>> default_date = "2018-01-01 19:30:30"
>>> zpdt = api.to_date(strd, default_date)
>>> zpdt.ISO8601()
'2018-01-01T19:30:30'
Use a DateTime object as fallback:
>>> strd = "2018-13-01 17:50:34"
>>> default_date = "2018-01-01 19:30:30"
>>> default_date = api.to_date(default_date)
>>> zpdt = api.to_date(strd, default_date)
>>> zpdt.ISO8601() == default_date.ISO8601()
True
Use a datetime object as fallback:
>>> strd = "2018-13-01 17:50:34"
>>> default_date = datetime.now()
>>> zpdt = api.to_date(strd, default_date)
>>> dzpdt = api.to_date(default_date)
>>> zpdt.ISO8601() == dzpdt.ISO8601()
True
Use a non-convertible value as fallback:
>>> strd = "2018-13-01 17:50:34"
>>> default_date = "something wrong here"
>>> zpdt = api.to_date(strd, default_date)
>>> zpdt is None
True
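The fallback behavior above can be sketched with the standard library alone. This is a simplified illustration under our own assumptions (the helper name and the format list are ours), not the actual api.to_date implementation:

```python
from datetime import datetime

def to_date_sketch(value, default=None):
    """Try to parse a date; fall back to the (also converted) default.

    Simplified illustration of the fallback chain shown above.
    """
    # the two string formats exercised in this section (ISO and European)
    formats = ("%Y-%m-%d %H:%M:%S", "%d.%m.%Y %H:%M:%S")
    if isinstance(value, datetime):
        return value
    if isinstance(value, str):
        for fmt in formats:
            try:
                return datetime.strptime(value, fmt)
            except ValueError:
                continue
    # value could not be converted: try the default the same way
    if default is not None and default is not value:
        return to_date_sketch(default)
    return None

# an invalid month (13) falls through to the default date
print(to_date_sketch("2018-13-01 17:50:34", "2018-01-01 19:30:30"))
```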
Check if floatable
>>> api.is_floatable(None)
False
>>> api.is_floatable("")
False
>>> api.is_floatable("31")
True
>>> api.is_floatable("31.23")
True
>>> api.is_floatable("-13")
True
>>> api.is_floatable("12,35")
False
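The floatable check behaves like a guarded float() call. A minimal equivalent could look like the following sketch (an assumption for illustration, not the real implementation):

```python
def is_floatable_sketch(value):
    """Return True if value can be converted with float()."""
    if value is None:
        return False
    try:
        float(value)
        return True
    except (TypeError, ValueError):
        return False

print(is_floatable_sketch("31.23"))  # True
print(is_floatable_sketch("12,35"))  # False: comma is not a decimal point for float()
```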
Convert to a float number
>>> api.to_float("2")
2.0
>>> api.to_float("2.234")
2.234
With default fallback:
>>> api.to_float(None, 2)
2.0
>>> api.to_float(None, "2")
2.0
>>> api.to_float("", 2)
2.0
>>> api.to_float("", "2")
2.0
>>> api.to_float(2.1, 2)
2.1
>>> api.to_float("2.1", 2)
2.1
>>> api.to_float("2.1", "2")
2.1
Convert to an int number
With default fallback:
>>> api.to_int(None, 2)
2
>>> api.to_int(None, "2")
2
>>> api.to_int("as", None) is None
True
>>> api.to_int("as", "2")
2
Convert float to string
Values very close to zero get converted by the float type to exponential notation, e.g.
>>> value = "0.000000000123"
>>> float_value = float(value)
>>> other_value = "0.0000001"
>>> other_float_value = float(other_value)
>>> other_float_value
1e-07
Converting it back to a string would keep this notation:
>>> str(float_value)
'1.23e-10'
>>> str(other_float_value)
'1e-07'
The function float_to_string converts the float value without exponential notation:
>>> api.float_to_string(float_value)
'0.000000000123'
>>> api.float_to_string(float_value) == value
True
Passing in the string value should convert it to the same value:
>>> api.float_to_string(value) == value
True
When the fraction contains more digits, it retains them all and takes care of trailing zeros:
>>> new_value = 0.000000000123777
>>> api.float_to_string(new_value)
'0.000000000123777'
Converting integers works as well:
>>> int_value = 123
>>> api.float_to_string(int_value)
'123'
The function also ensures that floatable string values remain unchanged:
>>> str_value = "1.99887766554433221100"
>>> api.float_to_string(str_value) == str_value
True
When a number in scientific notation is passed in, the function returns its decimal representation:
>>> api.float_to_string(1e-1)
'0.1'
>>> api.float_to_string(1e0)
'1'
>>> api.float_to_string(1e1)
'10'
>>> api.float_to_string(1e-16)
'0.0000000000000001'
>>> api.float_to_string(1e+16)
'10000000000000000'
>>> api.float_to_string(1e16)
'10000000000000000'
>>> api.float_to_string(-1e-1)
'-0.1'
>>> api.float_to_string(-1e+1)
'-10'
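One way to render a float in positional notation with the standard library is via the Decimal type, which can expand the exponent that repr() would keep. This is a sketch of the idea, not the api.float_to_string source:

```python
from decimal import Decimal

def float_to_string_sketch(value):
    """Render a number in positional (non-exponential) notation."""
    # Decimal(str(...)) keeps the shortest float repr; the "f" presentation
    # type then forces positional notation instead of the exponent form
    s = format(Decimal(str(value)), "f")
    # strip a trailing ".0" so whole numbers come out bare
    if s.endswith(".0"):
        s = s[:-2]
    return s

print(float_to_string_sketch(1.23e-10))  # 0.000000000123
```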
Convert to minutes
>>> api.to_minutes(hours=1)
60
>>> api.to_minutes(hours=1.5, minutes=30)
120
>>> api.to_minutes(hours=0, minutes=0, seconds=0)
0
>>> api.to_minutes(minutes=120)
120
>>> api.to_minutes(hours="1", minutes="120", seconds="120")
182
>>> api.to_minutes(days=3)
4320
>>> api.to_minutes(minutes=122.4567)
122
>>> api.to_minutes(minutes=122.4567, seconds=6)
123
>>> api.to_minutes(minutes=122.4567, seconds=6, round_to_int=False)
122.55669999999999
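The conversion above is plain arithmetic: each part is coerced to a number, summed in minutes, and rounded by default. A minimal sketch (our own helper, not the API's source):

```python
def to_minutes_sketch(days=0, hours=0, minutes=0, seconds=0, round_to_int=True):
    """Sum days/hours/minutes/seconds expressed in minutes."""
    total = (float(days) * 24 * 60
             + float(hours) * 60
             + float(minutes)
             + float(seconds) / 60.0)
    # by default the total is rounded to the nearest integer minute
    return int(round(total)) if round_to_int else total

print(to_minutes_sketch(hours=1.5, minutes=30))  # 120
```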
Get a registry record
Fetch a value of a registry record:
>>> key = "Products.CMFPlone.i18nl10n.override_dateformat.Enabled"
>>> api.get_registry_record(key)
False
If the record is not found, the default is returned:
>>> key = "non.existing.key"
>>> api.get_registry_record(key, default="NX_KEY")
'NX_KEY'
Create a display list
Static display lists can be looked up from either side (key or value) and are
returned in sorted order. They are used in selection widgets.
The function can handle a list of key->value pairs:
>>> pairs = [["a", "A"], ["b", "B"]]
>>> api.to_display_list(pairs)
<DisplayList [('', ''), ('a', 'A'), ('b', 'B')] at ...>
It can also handle a single pair:
>>> pairs = ["z", "Z"]
>>> api.to_display_list(pairs)
<DisplayList [('', ''), ('z', 'Z')] at ...>
It can also handle a single string:
>>> api.to_display_list("x")
<DisplayList [('', ''), ('x', 'x')] at ...>
It can be sorted either by key or by value:
>>> pairs = [["b", 10], ["a", 100]]
>>> api.to_display_list(pairs)
<DisplayList [('', ''), ('a', 100), ('b', 10)] at ...>
>>> api.to_display_list(pairs, sort_by="value")
<DisplayList [('b', 10), ('a', 100), ('', '')] at ...>
Converting a text to HTML
This function converts newline (\n) escape sequences in plain text to <br/>
tags for HTML rendering.
The function can handle plain texts:
>>> text = "First\r\nSecond\r\nThird"
>>> api.text_to_html(text)
'<p>First\r<br/>Second\r<br/>Third</p>'
Unicodes texts work as well:
>>> text = u"Ä\r\nÖ\r\nÜ"
>>> api.text_to_html(text)
'<p>\xc3\x83\xc2\x84\r<br/>\xc3\x83\xc2\x96\r<br/>\xc3\x83\xc2\x9c</p>'
The outer <p> wrap can also be omitted:
>>> text = "One\r\nTwo"
>>> api.text_to_html(text, wrap=None)
'One\r<br/>Two'
Or changed to another tag:
>>> text = "One\r\nTwo"
>>> api.text_to_html(text, wrap="div")
'<div>One\r<br/>Two</div>'
Empty strings are returned unchanged:
>>> text = ""
>>> api.text_to_html(text, wrap="div")
''
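The behavior shown above can be approximated with plain string operations: only the \n is replaced (the \r survives, as the outputs above show), and the result is optionally wrapped in a tag. This is a sketch of the idea, not the actual implementation:

```python
def text_to_html_sketch(text, wrap="p"):
    """Replace newlines with <br/> tags and optionally wrap the result."""
    if not text:
        # empty strings are returned unchanged
        return ""
    # replace only the \n so a preceding \r is preserved, as above
    html = text.replace("\n", "<br/>")
    if wrap:
        html = "<{tag}>{body}</{tag}>".format(tag=wrap, body=html)
    return html

print(text_to_html_sketch("One\r\nTwo", wrap="div"))
```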
Converting a string to UTF8
This function encodes unicode strings to UTF8.
In this test we use the German letter ä, which is unicode u'\xe4':
>>> api.to_utf8("ä")
'\xc3\xa4'
>>> api.to_utf8("\xc3\xa4")
'\xc3\xa4'
>>> api.to_utf8(api.safe_unicode("ä"))
'\xc3\xa4'
>>> api.to_utf8(u"\xe4")
'\xc3\xa4'
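In Python 3 terms the same encoding is just str.encode: the letter ä (U+00E4) encodes to the two UTF-8 bytes 0xC3 0xA4. Note the doctests above show Python 2 byte-string output:

```python
# Python 3 equivalent of the encoding shown above
encoded = "ä".encode("utf-8")
print(encoded)  # b'\xc3\xa4'
```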
Unsupported types return either the default value or fail:
>>> api.to_utf8(object())
Traceback (most recent call last):
...
APIError: Expected string type, got '<type 'object'>'
>>> api.to_utf8(object(), default="")
''
Check if an object is a string
This function checks if the given object is a string type.
>>> api.is_string("Hello World")
True
>>> api.is_string(u"Hello World")
True
>>> api.is_string(r"Hello World")
True
>>> api.is_string("")
True
>>> api.is_string(None)
False
>>> api.is_string(object)
False
Check if an object is temporary
This function checks if the given object is temporary, i.e. the object is
still being created and is not yet ready.
Check that the client we created earlier is not temporary:
>>> api.is_temporary(client)
False
Check with a step-by-step DX content type:
>>> import uuid
>>> from bika.lims.utils import tmpID
>>> from zope.component import getUtility
>>> from zope.component.interfaces import IFactory
>>> from zope.event import notify
>>> from zope.lifecycleevent import ObjectCreatedEvent
>>> portal_types = api.get_tool("portal_types")
>>> fti = portal_types.getTypeInfo("SampleContainer")
>>> factory = getUtility(IFactory, fti.factory)
>>> tmp_obj_id = tmpID()
>>> tmp_obj = factory(tmp_obj_id)
>>> tmp_obj._setPortalTypeName(fti.getId())
>>> api.is_temporary(tmp_obj)
True
>>> tmp_obj.title = u'Test container'
>>> notify(ObjectCreatedEvent(tmp_obj))
>>> api.is_temporary(tmp_obj)
True
The DX object is no longer temporary once it is assigned to the parent folder
and its definitive id is set:
>>> folder = api.get_setup().sample_containers
>>> uid = folder._setObject(tmp_obj_id, tmp_obj)
>>> api.is_temporary(tmp_obj)
True
>>> tmp_obj_id = "non-uid-temp-id"
>>> tmp_obj = folder._getOb(tmp_obj.getId())
>>> tmp_obj.id = tmp_obj_id
>>> api.is_temporary(tmp_obj)
False
But even if we use a non-UID id as the temporary id on creation, the system
will still consider the object temporary until it is assigned to its parent
folder:
>>> tmp_obj = factory(tmp_obj_id)
>>> tmp_obj._setPortalTypeName(fti.getId())
>>> api.is_temporary(tmp_obj)
True
>>> tmp_obj.title = u'Test container 2'
>>> notify(ObjectCreatedEvent(tmp_obj))
>>> api.is_temporary(tmp_obj)
True
>>> folder = api.get_setup().sample_containers
>>> uid = folder._setObject(tmp_obj_id, tmp_obj)
>>> api.is_temporary(tmp_obj)
True
>>> tmp_obj = folder._getOb(tmp_obj.getId())
>>> api.is_temporary(tmp_obj)
False
On the other hand, an object with a UID as its id is always considered temporary:
>>> tmp_obj.id = uuid.uuid4().hex
>>> api.is_temporary(tmp_obj)
True
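The "UID-shaped id" notion can be illustrated with a regular expression: uuid4().hex yields 32 lowercase hexadecimal characters. This sketch only illustrates the shape check and is an assumption about it, not the api.is_temporary source:

```python
import re
import uuid

# uuid4().hex is exactly 32 lowercase hex characters
UID_RX = re.compile(r"^[0-9a-f]{32}$")

def looks_like_uid(object_id):
    """Return True if the id has the shape of a uuid4 hex string."""
    return bool(UID_RX.match(object_id))

print(looks_like_uid(uuid.uuid4().hex))   # True
print(looks_like_uid("non-uid-temp-id"))  # False
```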
If we use api.create, the object returned is not temporary:
>>> obj = api.create(setup.sample_containers, "SampleContainer", title="Another sample container")
>>> api.is_temporary(obj)
False
AT content types are considered temporary while being created inside
portal_factory:
>>> tmp_path = "portal_factory/Client/{}".format(tmpID())
>>> tmp_client = portal.clients.restrictedTraverse(tmp_path)
>>> api.is_temporary(tmp_client)
True
Copying content
This function helps to do it right and copies an existing content for you.
Here we create a copy of the Client we created earlier:
>>> client.setTaxNumber('VAT12345')
>>> client2 = api.copy_object(client, title="Test Client 2")
>>> client2
<Client at /plone/clients/client-2>
>>> client2.Title()
'Test Client 2'
>>> client2.getTaxNumber()
'VAT12345'
We can override source values on copy as well:
>>> client.setBankName('Peanuts Bank Ltd')
>>> client3 = api.copy_object(client, title="Test Client 3",
... BankName="Nuts Bank Ltd")
>>> client3
<Client at /plone/clients/client-3>
>>> client3.Title()
'Test Client 3'
>>> client3.getTaxNumber()
'VAT12345'
>>> client3.getBankName()
'Nuts Bank Ltd'
We can create a copy in a container other than the source's:
>>> sample_points = self.portal.bika_setup.bika_samplepoints
>>> sample_point = api.create(sample_points, "SamplePoint", title="Test")
>>> sample_point
<SamplePoint at /plone/bika_setup/bika_samplepoints/samplepoint-1>
>>> sample_point_copy = api.copy_object(sample_point, container=client3)
>>> sample_point_copy
<SamplePoint at /plone/clients/client-3/samplepoint-2>
We can even copy to a different portal type:
>>> suppliers = self.portal.bika_setup.bika_suppliers
>>> supplier = api.copy_object(client, container=suppliers,
... portal_type="Supplier", title="Supplier 1")
>>> supplier
<Supplier at /plone/bika_setup/bika_suppliers/supplier-1>
>>> supplier.Title()
'Supplier 1'
>>> supplier.getTaxNumber()
'VAT12345'
>>> supplier.getBankName()
'Peanuts Bank Ltd'
It works for Dexterity types as well:
>>> sample_containers = self.portal.bika_setup.sample_containers
>>> sample_container = api.create(sample_containers, "SampleContainer",
... title="Source Sample Container",
... description="Sample container to test",
... capacity="100 ml")
>>> sample_container.Title()
'Source Sample Container'
>>> sample_container.Description()
'Sample container to test'
>>> sample_container.getCapacity()
'100 ml'
>>> sample_container_copy = api.copy_object(sample_container,
... title="Target Sample Container",
... capacity="50 ml")
>>> sample_container_copy.Title()
'Target Sample Container'
>>> sample_container_copy.Description()
'Sample container to test'
>>> sample_container_copy.getCapacity()
'50 ml'
Parse to JSON
>>> api.parse_json('["a", "b", "c"]')
[u'a', u'b', u'c']
>>> obj = api.parse_json('{"a": 1, "b": 2, "c": 3}')
>>> [obj[key] for key in 'abc']
[1, 2, 3]
>>> obj = api.parse_json('{"a": 1, "b": ["one", "two", 3], "c": 3}')
>>> [obj[key] for key in 'abc']
[1, [u'one', u'two', 3], 3]
>>> api.parse_json("ko")
''
>>> api.parse_json("ko", default="ok")
'ok'
Convert to list
>>> api.to_list(None)
[None]
>>> api.to_list(["a", "b", "c"])
['a', 'b', 'c']
>>> api.to_list('["a", "b", "c"]')
[u'a', u'b', u'c']
>>> api.to_list("a, b, c")
['a, b, c']
>>> api.to_list([{"a": 1}, {"b": 2}, {"c": 3}])
[{'a': 1}, {'b': 2}, {'c': 3}]
>>> api.to_list('[{"a": 1}, {"b": 2}, {"c": 3}]')
[{u'a': 1}, {u'b': 2}, {u'c': 3}]
>>> api.to_list({"a": 1})
[{'a': 1}]
>>> api.to_list('{"a": 1, "b": ["one", "two", 3], "c": 3}')
['{"a": 1, "b": ["one", "two", 3], "c": 3}']
>>> api.to_list(["[1, 2, 3]", "b", "c"])
['[1, 2, 3]', 'b', 'c']
>>> api.to_list('["[1, 2, 3]", "b", "c"]')
[u'[1, 2, 3]', u'b', u'c']
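The dispatch behavior above can be sketched as follows: strings that parse as JSON lists are unpacked, and everything else (including JSON objects and plain strings) is wrapped in a list. This is a simplified illustration under our own naming, not the actual api.to_list source:

```python
import json

def to_list_sketch(value):
    """Return value as a list: parse JSON list strings, wrap everything else."""
    if isinstance(value, list):
        return value
    if isinstance(value, str):
        try:
            parsed = json.loads(value)
            # only JSON *lists* are unpacked; JSON objects stay wrapped,
            # matching the behavior shown above
            if isinstance(parsed, list):
                return parsed
        except ValueError:
            pass
        return [value]
    return [value]

print(to_list_sketch('["a", "b", "c"]'))  # ['a', 'b', 'c']
print(to_list_sketch('{"a": 1}'))         # ['{"a": 1}']
```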
API Analysis
The api_analysis module provides single functions for single purposes
specifically related to analyses.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_analysis
Test Setup
Needed Imports:
>>> import re
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.api.analysis import get_formatted_interval
>>> from bika.lims.api.analysis import is_out_of_range
>>> from bika.lims.content.analysisrequest import AnalysisRequest
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils import tmpID
>>> from bika.lims.workflow import doActionFor
>>> from bika.lims.workflow import getCurrentState
>>> from bika.lims.workflow import getAllowedTransitions
>>> from DateTime import DateTime
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from plone.app.testing import setRoles
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), DuplicateVariation="0.5")
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID(), DuplicateVariation="0.5")
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID(), DuplicateVariation="0.5")
>>> Mg = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg", Price="20", Category=category.UID(), DuplicateVariation="0.5")
>>> service_uids = [api.get_uid(an) for an in [Cu, Fe, Au, Mg]]
Create an Analysis Specification for Water:
>>> sampletype_uid = api.get_uid(sampletype)
>>> rr1 = {"keyword": "Au", "min": "-5", "max": "5", "warn_min": "-5.5", "warn_max": "5.5"}
>>> rr2 = {"keyword": "Cu", "min": "10", "max": "20", "warn_min": "9.5", "warn_max": "20.5"}
>>> rr3 = {"keyword": "Fe", "min": "0", "max": "10", "warn_min": "-0.5", "warn_max": "10.5"}
>>> rr4 = {"keyword": "Mg", "min": "10", "max": "10"}
>>> rr = [rr1, rr2, rr3, rr4]
>>> specification = api.create(bikasetup.bika_analysisspecs, "AnalysisSpec", title="Lab Water Spec", SampleType=sampletype_uid, ResultsRange=rr)
>>> spec_uid = api.get_uid(specification)
Create a Reference Definition for blank:
>>> blankdef = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': Au.UID(), 'result': '0', 'min': '0', 'max': '0'},]
>>> blankdef.setReferenceResults(blank_refs)
And for control:
>>> controldef = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': Au.UID(), 'result': '10', 'min': '9.99', 'max': '10.01'},
... {'uid': Cu.UID(), 'result': '-0.9','min': '-1.08', 'max': '-0.72'},]
>>> controldef.setReferenceResults(control_refs)
>>> blank = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blankdef,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
>>> control = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=controldef,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Create an Analysis Request:
>>> values = {
... 'Client': api.get_uid(client),
... 'Contact': api.get_uid(contact),
... 'DateSampled': date_now,
... 'SampleType': sampletype_uid,
... 'Specification': spec_uid,
... 'Priority': '1',
... }
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> success = doActionFor(ar, 'receive')
Create a new Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> analyses = map(api.get_object, ar.getAnalyses())
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Add a duplicate for Cu:
>>> position = worksheet.get_slot_position(ar, 'a')
>>> duplicates = worksheet.addDuplicateAnalyses(position)
>>> duplicates.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
Add a blank and a control:
>>> blanks = worksheet.addReferenceAnalyses(blank, service_uids)
>>> blanks.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
>>> controls = worksheet.addReferenceAnalyses(control, service_uids)
>>> controls.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
Check if results are out of range
First, get the analyses from slot 1 and sort them in ascending order:
>>> analyses = worksheet.get_analyses_at(1)
>>> analyses.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
Set results for analysis Au (min: -5, max: 5, warn_min: -5.5, warn_max: 5.5):
>>> au_analysis = analyses[0]
>>> au_analysis.setResult(2)
>>> is_out_of_range(au_analysis)
(False, False)
>>> au_analysis.setResult(-2)
>>> is_out_of_range(au_analysis)
(False, False)
>>> au_analysis.setResult(-5)
>>> is_out_of_range(au_analysis)
(False, False)
>>> au_analysis.setResult(5)
>>> is_out_of_range(au_analysis)
(False, False)
>>> au_analysis.setResult(10)
>>> is_out_of_range(au_analysis)
(True, True)
>>> au_analysis.setResult(-10)
>>> is_out_of_range(au_analysis)
(True, True)
Results within shoulders:
>>> au_analysis.setResult(-5.2)
>>> is_out_of_range(au_analysis)
(True, False)
>>> au_analysis.setResult(-5.5)
>>> is_out_of_range(au_analysis)
(True, False)
>>> au_analysis.setResult(-5.6)
>>> is_out_of_range(au_analysis)
(True, True)
>>> au_analysis.setResult(5.2)
>>> is_out_of_range(au_analysis)
(True, False)
>>> au_analysis.setResult(5.5)
>>> is_out_of_range(au_analysis)
(True, False)
>>> au_analysis.setResult(5.6)
>>> is_out_of_range(au_analysis)
(True, True)
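The (out_of_range, out_of_shoulders) tuple for a routine analysis can be sketched from the min/max and warn_min/warn_max values above. This is a simplified model (the real function reads the ranges from the active specification):

```python
def out_of_range_sketch(result, min_, max_, warn_min, warn_max):
    """Return (out_of_range, out_of_shoulders) against a closed interval."""
    out = not (min_ <= result <= max_)
    # a result inside the warn band is out of range but still within shoulders
    out_shoulders = not (warn_min <= result <= warn_max)
    return out, out_shoulders

# Au spec above: min -5, max 5, warn_min -5.5, warn_max 5.5
print(out_of_range_sketch(5.2, -5, 5, -5.5, 5.5))  # (True, False)
```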
Check if results for duplicates are out of range
Get the first duplicate analysis that comes from Au:
>>> duplicate = duplicates[0]
A duplicate is considered out of range if its result does not match the result
of the analysis it was duplicated from, with the Duplicate Variation in % as
the margin of error. The Duplicate Variation assigned to the Analysis Service
Au is 0.5%:
>>> dup_variation = au_analysis.getDuplicateVariation()
>>> dup_variation = api.to_float(dup_variation)
>>> dup_variation
0.5
Set an in-range result (between -5 and 5) for the routine analysis and check
all variants on its duplicate. Given that the duplicate variation is 0.5%, the
valid range for the duplicate is the Au result +/- 0.5%:
>>> result = 2.0
>>> au_analysis.setResult(result)
>>> is_out_of_range(au_analysis)
(False, False)
>>> duplicate.setResult(result)
>>> is_out_of_range(duplicate)
(False, False)
>>> dup_min_range = result - (result*(dup_variation/100))
>>> duplicate.setResult(dup_min_range)
>>> is_out_of_range(duplicate)
(False, False)
>>> duplicate.setResult(dup_min_range - 0.5)
>>> is_out_of_range(duplicate)
(True, True)
>>> dup_max_range = result + (result*(dup_variation/100))
>>> duplicate.setResult(dup_max_range)
>>> is_out_of_range(duplicate)
(False, False)
>>> duplicate.setResult(dup_max_range + 0.5)
>>> is_out_of_range(duplicate)
(True, True)
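The valid band for a duplicate follows directly from the original result and the variation percentage; a minimal sketch (the helper name is ours, not the API's):

```python
def duplicate_range(result, variation_pct):
    """Valid (min, max) for a duplicate, given the original result and % variation."""
    # the margin is a percentage of the magnitude of the original result
    margin = abs(result) * variation_pct / 100.0
    return result - margin, result + margin

# with result 2.0 and a 0.5% variation the duplicate must fall in [1.99, 2.01]
print(duplicate_range(2.0, 0.5))
```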
Set an out-of-range result, but within shoulders, for the routine analysis and
check all variants on its duplicate. Given that the duplicate variation is
0.5%, the valid range for the duplicate is the Au result +/- 0.5%:
>>> result = 5.5
>>> au_analysis.setResult(result)
>>> is_out_of_range(au_analysis)
(True, False)
>>> duplicate.setResult(result)
>>> is_out_of_range(duplicate)
(False, False)
>>> dup_min_range = result - (result*(dup_variation/100))
>>> duplicate.setResult(dup_min_range)
>>> is_out_of_range(duplicate)
(False, False)
>>> duplicate.setResult(dup_min_range - 0.5)
>>> is_out_of_range(duplicate)
(True, True)
>>> dup_max_range = result + (result*(dup_variation/100))
>>> duplicate.setResult(dup_max_range)
>>> is_out_of_range(duplicate)
(False, False)
>>> duplicate.setResult(dup_max_range + 0.5)
>>> is_out_of_range(duplicate)
(True, True)
Set an out-of-range and out-of-shoulders result for the routine analysis and
check all variants on its duplicate. Given that the duplicate variation is
0.5%, the valid range for the duplicate is the Au result +/- 0.5%:
>>> result = -7.0
>>> au_analysis.setResult(result)
>>> is_out_of_range(au_analysis)
(True, True)
>>> duplicate.setResult(result)
>>> is_out_of_range(duplicate)
(False, False)
>>> dup_min_range = result - (abs(result)*(dup_variation/100))
>>> duplicate.setResult(dup_min_range)
>>> is_out_of_range(duplicate)
(False, False)
>>> duplicate.setResult(dup_min_range - 0.5)
>>> is_out_of_range(duplicate)
(True, True)
>>> dup_max_range = result + (abs(result)*(dup_variation/100))
>>> duplicate.setResult(dup_max_range)
>>> is_out_of_range(duplicate)
(False, False)
>>> duplicate.setResult(dup_max_range + 0.5)
>>> is_out_of_range(duplicate)
(True, True)
Check if results for Reference Analyses (blanks + controls) are out of range
Reference Analyses (controls and blanks) do not use the result ranges defined
in the specifications; they use the result range defined in the Reference
Sample they were generated from. In turn, the result ranges defined in
Reference Samples can be set manually or acquired from the Reference
Definition they are associated with. Another difference from routine analyses
is that reference analyses do not expect a valid range but a discrete value,
so shoulders are built based on the % of error.
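For reference analyses the check therefore reduces to a discrete expected value plus a % error band. A sketch of that rule (names and the simplification to a single band are ours):

```python
def reference_out_of_range(result, expected, error_pct):
    """Return (out_of_range, out_of_shoulders) for a reference analysis.

    The allowed band is expected +/- (|expected| * error_pct / 100). With an
    expected value of 0 the band collapses, so any deviation is fully out.
    """
    margin = abs(expected) * error_pct / 100.0
    out = not (expected - margin <= result <= expected + margin)
    # no shoulders beyond the % error band in this simplified model
    return out, out

# a control expecting 10 +/- 0.1% accepts results in [9.99, 10.01]
print(reference_out_of_range(9.995, 10, 0.1))  # (False, False)
```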
Blank Analyses
The first blank analysis corresponds to Au:
>>> au_blank = blanks[0]
For the Au blank, as per the reference definition used above, the expected
result is 0 +/- 0.1%. Since the expected result is 0, no shoulders will be
considered regardless of the % of error. Thus, the result will always be
"out of shoulders" when out of range.
>>> au_blank.setResult(0.0)
>>> is_out_of_range(au_blank)
(False, False)
>>> au_blank.setResult("0")
>>> is_out_of_range(au_blank)
(False, False)
>>> au_blank.setResult(0.0001)
>>> is_out_of_range(au_blank)
(True, True)
>>> au_blank.setResult("0.0001")
>>> is_out_of_range(au_blank)
(True, True)
>>> au_blank.setResult(-0.0001)
>>> is_out_of_range(au_blank)
(True, True)
>>> au_blank.setResult("-0.0001")
>>> is_out_of_range(au_blank)
(True, True)
Control Analyses
The first control analysis corresponds to Au:
>>> au_control = controls[0]
For the Au control, as per the reference definition used above, the expected
result is 10 +/- 0.1% = 10 +/- 0.01.
First, check for in-range values:
>>> au_control.setResult(10)
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult(10.0)
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult("10")
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult("10.0")
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult(9.995)
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult("9.995")
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult(10.005)
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult("10.005")
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult(9.99)
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult("9.99")
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult(10.01)
>>> is_out_of_range(au_control)
(False, False)
>>> au_control.setResult("10.01")
>>> is_out_of_range(au_control)
(False, False)
Now, check for out-of-range results:
>>> au_control.setResult(9.98)
>>> is_out_of_range(au_control)
(True, True)
>>> au_control.setResult("9.98")
>>> is_out_of_range(au_control)
(True, True)
>>> au_control.setResult(10.011)
>>> is_out_of_range(au_control)
(True, True)
>>> au_control.setResult("10.011")
>>> is_out_of_range(au_control)
(True, True)
And do the same with the control for Cu, which expects -0.9 +/- 20%:
>>> cu_control = controls[1]
First, check for in-range values:
>>> cu_control.setResult(-0.9)
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult("-0.9")
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult(-1.08)
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult("-1.08")
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult(-1.07)
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult("-1.07")
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult(-0.72)
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult("-0.72")
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult(-0.73)
>>> is_out_of_range(cu_control)
(False, False)
>>> cu_control.setResult("-0.73")
>>> is_out_of_range(cu_control)
(False, False)
Now, check for out-of-range results:
>>> cu_control.setResult(0)
>>> is_out_of_range(cu_control)
(True, True)
>>> cu_control.setResult("0")
>>> is_out_of_range(cu_control)
(True, True)
>>> cu_control.setResult(-0.71)
>>> is_out_of_range(cu_control)
(True, True)
>>> cu_control.setResult("-0.71")
>>> is_out_of_range(cu_control)
(True, True)
>>> cu_control.setResult(-1.09)
>>> is_out_of_range(cu_control)
(True, True)
>>> cu_control.setResult("-1.09")
>>> is_out_of_range(cu_control)
(True, True)
Check if results are out of range when open interval is used
Set an open interval for min and max in the water specification:
>>> ranges = specification.getResultsRange()
>>> for range in ranges:
... range['min_operator'] = 'gt'
... range['max_operator'] = 'lt'
>>> specification.setResultsRange(ranges)
We need to re-apply the Specification for the changes to take effect:
>>> ar.setSpecification(None)
>>> ar.setSpecification(specification)
First, get the analyses from slot 1 and sort them in ascending order:
>>> analyses = worksheet.get_analyses_at(1)
>>> analyses.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
Set results for analysis Au (min: -5, max: 5, warn_min: -5.5, warn_max: 5.5):
>>> au_analysis = analyses[0]
>>> au_analysis.setResult(-5)
>>> is_out_of_range(au_analysis)
(True, False)
>>> au_analysis.setResult(5)
>>> is_out_of_range(au_analysis)
(True, False)
Check if results are out of range when left-open interval is used
Set a left-open interval for min and max in the water specification:
>>> ranges = specification.getResultsRange()
>>> for range in ranges:
... range['min_operator'] = 'geq'
... range['max_operator'] = 'lt'
>>> specification.setResultsRange(ranges)
We need to re-apply the Specification for the changes to take effect:
>>> ar.setSpecification(None)
>>> ar.setSpecification(specification)
First, get the analyses from slot 1 and sort them in ascending order:
>>> analyses = worksheet.get_analyses_at(1)
>>> analyses.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
Set results for analysis Au (min: -5, max: 5, warn_min: -5.5, warn_max: 5.5):
>>> au_analysis = analyses[0]
>>> au_analysis.setResult(-5)
>>> is_out_of_range(au_analysis)
(False, False)
>>> au_analysis.setResult(5)
>>> is_out_of_range(au_analysis)
(True, False)
Check if results are out of range when right-open interval is used
Set a right-open interval for min and max in the water specification:
>>> ranges = specification.getResultsRange()
>>> for range in ranges:
... range['min_operator'] = 'gt'
... range['max_operator'] = 'leq'
>>> specification.setResultsRange(ranges)
We need to re-apply the Specification for the changes to take effect:
>>> ar.setSpecification(None)
>>> ar.setSpecification(specification)
First, get the analyses from slot 1 and sort them in ascending order:
>>> analyses = worksheet.get_analyses_at(1)
>>> analyses.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
Set results for analysis Au (min: -5, max: 5, warn_min: -5.5, warn_max: 5.5):
>>> au_analysis = analyses[0]
>>> au_analysis.setResult(-5)
>>> is_out_of_range(au_analysis)
(True, False)
>>> au_analysis.setResult(5)
>>> is_out_of_range(au_analysis)
(False, False)
API Analysis Service
The api_analysisservice module provides single functions for single purposes
specifically related to analysis services.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_AnalysisService
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from bika.lims.api.analysisservice import get_calculation_dependencies_for
>>> from bika.lims.api.analysisservice import get_calculation_dependants_for
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> calculations = setup.bika_calculations
>>> analysisservices = setup.bika_analysisservices
Test user:
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Calculation Dependencies
Calculations can reference analysis services by Keyword in their Formula.
The referenced Analysis Services of the calculation are then dependencies of
the Analysis Service which has the Calculation assigned.
The dependencies of an Analysis Service can be retrieved by the API function
get_calculation_dependencies_for.
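Keywords are referenced in formulas in square brackets, so extracting the referenced services can be sketched with a regular expression. This is an illustration of the formula syntax, not the actual dependency resolver:

```python
import re

def formula_keywords(formula):
    """Extract service keywords referenced as [Keyword] in a formula."""
    return re.findall(r"\[([^\]]+)\]", formula)

print(formula_keywords("[Ca] + [Mg] + [Fe]"))  # ['Ca', 'Mg', 'Fe']
```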
Create some Analysis Services with unique Keywords:
>>> Ca = api.create(analysisservices, "AnalysisService", title="Calcium", Keyword="Ca")
>>> Mg = api.create(analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg")
>>> Cu = api.create(analysisservices, "AnalysisService", title="Copper", Keyword="Cu")
>>> Fe = api.create(analysisservices, "AnalysisService", title="Iron", Keyword="Fe")
>>> Au = api.create(analysisservices, "AnalysisService", title="Aurum", Keyword="Au")
>>> Test1 = api.create(analysisservices, "AnalysisService", title="Calculated Test Service 1", Keyword="Test1")
>>> Test2 = api.create(analysisservices, "AnalysisService", title="Calculated Test Service 2", Keyword="Test2")
None of these services has so far any calculation dependencies:
>>> any(map(get_calculation_dependencies_for, [Ca, Mg, Cu, Fe, Au, Test1, Test2]))
False
Create a calculation, which references the Ca and Mg services, and link the
calculation to the Test1 service:
>>> calc1 = api.create(calculations, "Calculation", title="Calculation 1")
>>> calc1.setFormula("[Ca] + [Mg]")
>>> Test1.setCalculation(calc1)
The Test1 service depends now on Ca and Mg:
>>> deps = get_calculation_dependencies_for(Test1)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
['Ca', 'Mg']
Now we add Fe to the calculation:
>>> calc1.setFormula("[Ca] + [Mg] + [Fe]")
The Test1 service depends now on Fe as well:
>>> deps = get_calculation_dependencies_for(Test1)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
['Ca', 'Fe', 'Mg']
Now we create a calculation which doubles the results of the calculated Test1
service and assign it to the Test2 service:
>>> calc2 = api.create(calculations, "Calculation", title="Calculation 2")
>>> calc2.setFormula("[Test1] * 2")
>>> Test2.setCalculation(calc2)
The Test2 service depends now on the Test1 service:
>>> deps = get_calculation_dependencies_for(Test2)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
['Test1']
Calculation Dependants
To get all Analysis Services which depend on a specific Analysis Service, the
API provides the function get_calculation_dependants_for.
The Analysis Service Test1 references Ca, Mg and Fe by its calculation:
>>> Test1.getCalculation().getFormula()
'[Ca] + [Mg] + [Fe]'
Therefore, the dependant service of Ca, Mg and Fe is Test1:
>>> deps = get_calculation_dependants_for(Ca)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
['Test1']
>>> deps = get_calculation_dependants_for(Mg)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
['Test1']
>>> deps = get_calculation_dependants_for(Fe)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
['Test1']
The Analysis Service Test2 doubles the calculated result from Test1:
>>> Test2.getCalculation().getFormula()
'[Test1] * 2'
Therefore, Test2 is a dependant of Test1:
>>> deps = get_calculation_dependants_for(Test1)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
['Test2']
Checking edge cases
The assigned calculation of Test2 doubles the value of Test1:
>>> Test2.getCalculation().getFormula()
'[Test1] * 2'
But what happens when the calculation references Test2 as well?
>>> Test2.getCalculation().setFormula("[Test1] * [Test2]")
>>> Test2.getCalculation().getFormula()
'[Test1] * [Test2]'
Checking the dependants of Test2 should not cause an infinite recursion:
>>> deps = get_calculation_dependants_for(Test2)
>>> sorted(map(lambda d: d.getKeyword(), deps.values()))
[]
SENAITE Catalog API
The catalog API provides a simple interface to manage SENAITE catalogs.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_catalog
Test Setup
Imports:
>>> from senaite.core.api import catalog as capi
Setup the test user
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Catalog indexes
Getting a list of all indexes:
>>> sample_catalog = capi.get_catalog(SAMPLE_CATALOG)
>>> indexes = capi.get_indexes(sample_catalog)
>>> "UID" in indexes
True
Adding a new index to the catalog:
>>> IDX = "my_fancy_index"
>>> capi.add_index(sample_catalog, IDX, "FieldIndex")
True
>>> IDX in capi.get_indexes(sample_catalog)
True
>>> index = capi.get_index(sample_catalog, IDX)
>>> index.__class__
<class 'Products.PluginIndexes.FieldIndex.FieldIndex.FieldIndex'>
Reindexing the new index:
>>> capi.reindex_index(sample_catalog, IDX)
True
Removing an index from the catalog:
>>> capi.del_index(sample_catalog, IDX)
True
>>> IDX in capi.get_indexes(sample_catalog)
False
Adding a ZCTextIndex requires a ZCLexicon. Therefore, add_zc_text_index
takes care of creating one:
>>> LEXICON = "my_fancy_lexicon"
>>> capi.add_zc_text_index(sample_catalog, IDX, lex_id=LEXICON)
True
>>> index = capi.get_index(sample_catalog, IDX)
>>> index.__class__
<class 'Products.ZCTextIndex.ZCTextIndex.ZCTextIndex'>
>>> lexicon = sample_catalog[LEXICON]
>>> lexicon.__class__
<class 'Products.ZCTextIndex.ZCTextIndex.PLexicon'>
>>> capi.del_index(sample_catalog, IDX)
True
Catalog Columns
Getting a list of all catalog columns
>>> sample_catalog = capi.get_catalog(SAMPLE_CATALOG)
>>> columns = capi.get_columns(sample_catalog)
>>> "getId" in columns
True
Adding a column to the catalog:
>>> COLUMN = "my_fancy_column"
>>> capi.add_column(sample_catalog, COLUMN)
True
Check if the column exists:
>>> COLUMN in capi.get_columns(sample_catalog)
True
Delete the column:
>>> capi.del_column(sample_catalog, COLUMN)
True
Check if the column was deleted:
>>> COLUMN in capi.get_columns(sample_catalog)
False
Searchable Text Querystring
https://zope.readthedocs.io/en/latest/zopebook/SearchingZCatalog.html#boolean-expressions
Searching for a single word:
>>> capi.to_searchable_text_qs("sample")
u'sample*'
Without wildcard:
>>> capi.to_searchable_text_qs("sample", wildcard=False)
u'sample'
Wildcards at the beginning of the search terms are not supported:
>>> capi.to_searchable_text_qs("?H2O")
u'H2O*'
>>> capi.to_searchable_text_qs("*H2O")
u'H2O*'
Wildcards at the end of the search terms are retained:
>>> capi.to_searchable_text_qs("H2O?")
u'H2O?'
>>> capi.to_searchable_text_qs("H2O*")
u'H2O*'
If the search contains only a single character, it needs to be a word character:
>>> capi.to_searchable_text_qs("W")
u'W*'
>>> capi.to_searchable_text_qs("$")
u''
Searching for a unicode word:
>>> capi.to_searchable_text_qs("AäOöUüZ")
u'A\xe4O\xf6U\xfcZ*'
Searching for multiple unicode words:
>>> capi.to_searchable_text_qs("Ä Ö Ü")
u'\xc4* AND \xd6* AND \xdc*'
Searching for a concatenated word:
>>> capi.to_searchable_text_qs("H2O-0001")
u'H2O-0001*'
Searching for two words:
>>> capi.to_searchable_text_qs("Fresh Funky")
u'Fresh* AND Funky*'
Tricky query strings (with and/or in words or in between):
>>> capi.to_searchable_text_qs("Fresh and Funky Oranges from Andorra")
u'Fresh* AND Funky* AND Oranges* AND from* AND Andorra*'
Search with special characters:
>>> capi.to_searchable_text_qs("H2O_0001")
u'H2O_0001*'
>>> capi.to_searchable_text_qs("H2O.0001")
u'H2O.0001*'
>>> capi.to_searchable_text_qs("H2O<>0001")
u'H2O<>0001*'
>>> capi.to_searchable_text_qs("H2O:0001")
u'H2O:0001*'
>>> capi.to_searchable_text_qs("H2O/0001")
u'H2O/0001*'
>>> capi.to_searchable_text_qs("'H2O-0001'")
u'H2O-0001*'
>>> capi.to_searchable_text_qs("\'H2O-0001\'")
u'H2O-0001*'
>>> capi.to_searchable_text_qs("(H2O-0001)*")
u'H2O-0001*'
>>> capi.to_searchable_text_qs("****([H2O-0001])****")
u'H2O-0001*'
>>> capi.to_searchable_text_qs("********************")
u''
>>> capi.to_searchable_text_qs("*H2O*")
u'H2O*'
>>> capi.to_searchable_text_qs("And the question is AND OR maybe NOT AND")
u'the* AND question* AND is* AND OR maybe* AND NOT*'
>>> capi.to_searchable_text_qs("AND OR")
u''
>>> capi.to_searchable_text_qs("H2O NOT 11")
u'H2O* AND NOT* AND 11*'
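The examples above suggest roughly the following normalization rules. Here is
a simplified re-implementation sketch (not the actual SENAITE code, and it
glosses over some of the trickier operator and wildcard edge cases shown
above): strip wrapping punctuation and wildcards, drop bare boolean
operators, append a trailing wildcard and join the remaining words with AND.

```python
# Simplified sketch of the querystring normalization (hypothetical helper,
# not the SENAITE implementation)
OPERATORS = {"AND", "OR", "NOT"}

def to_qs(term, wildcard=True):
    parts = []
    for word in term.split():
        # strip wildcards and wrapping punctuation from both ends
        word = word.strip("*?()[]'\"")
        # drop empty leftovers and bare boolean operators
        if not word or word.upper() in OPERATORS:
            continue
        parts.append(word + ("*" if wildcard else ""))
    return " AND ".join(parts)

print(to_qs("Fresh Funky"))           # Fresh* AND Funky*
print(to_qs("****([H2O-0001])****"))  # H2O-0001*
print(to_qs("AND OR"))                # (empty string)
```

This reproduces the common cases; the real function additionally retains
trailing `?`/`*` wildcards and handles mixed operator sequences.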
SENAITE datetime API
The datetime API provides functions to handle Python datetime and Zope’s DateTime objects.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_datetime
Test Setup
Imports:
>>> from bika.lims.api import get_tool
>>> from senaite.core.api import dtime
Define some variables:
>>> DATEFORMAT = "%Y-%m-%d %H:%M"
Test fixture:
>>> import os
>>> os.environ["TZ"] = "CET"
Setup the test user
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Check if an object is a Python datetime
>>> from datetime import datetime
>>> dtime.is_dt(datetime.now())
True
>>> dtime.is_dt("2021-12-24")
False
Check if an object is a Python date
>>> from datetime import date
>>> dtime.is_d(date.today())
True
>>> dtime.is_d("2022-01-01")
False
Check if an object is a ZOPE DateTime
>>> from DateTime import DateTime
>>> dtime.is_DT(DateTime())
True
>>> dtime.is_DT("2021-12-24")
False
Check if an object represents a date
>>> dtime.is_date(date.today())
True
>>> dtime.is_date(datetime.now())
True
>>> dtime.is_date(DateTime())
True
>>> dtime.is_date("2021-12-24")
True
>>> dtime.is_date("2021-12-24T12:00:00")
True
>>> dtime.is_date("2021-12-24T12:00:00+01:00")
True
>>> dtime.is_date("Hello World")
False
>>> dtime.is_date(object())
False
Check if a datetime object is TZ naive
>>> dtime.is_timezone_naive(date.today())
True
>>> dtime.is_timezone_naive(datetime.now())
True
>>> dtime.is_timezone_naive(DateTime())
False
>>> dtime.is_timezone_naive("2021-12-24")
True
>>> dtime.is_timezone_naive("2021-12-24T12:00:00")
True
>>> dtime.is_timezone_naive("2021-12-24T12:00:00+01:00")
False
Check if a datetime object is TZ aware
>>> dtime.is_timezone_aware(date.today())
False
>>> dtime.is_timezone_aware(datetime.now())
False
>>> dtime.is_timezone_aware(DateTime())
True
>>> dtime.is_timezone_aware("2021-12-24")
False
>>> dtime.is_timezone_aware("2021-12-24T12:00:00")
False
>>> dtime.is_timezone_aware("2021-12-24T12:00:00+01:00")
True
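These checks follow the standard Python convention: a datetime is timezone
aware if and only if its tzinfo is set and returns a UTC offset. A
stdlib-only sketch (using Python 3's datetime.timezone for the aware case):

```python
from datetime import datetime, timezone

def is_naive(dt):
    # Python's official rule: a datetime is aware iff tzinfo is set
    # and utcoffset() returns a value
    return dt.tzinfo is None or dt.tzinfo.utcoffset(dt) is None

print(is_naive(datetime.now()))              # True
print(is_naive(datetime.now(timezone.utc)))  # False
```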
Convert to DateTime
>>> DATE = "2021-12-24 12:00"
Timezone naive datetimes are converted to GMT+0:
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dt
datetime.datetime(2021, 12, 24, 12, 0)
>>> dtime.to_DT(DATE)
DateTime('2021/12/24 12:00:00 GMT+0')
>>> dtime.to_DT(dt)
DateTime('2021/12/24 12:00:00 GMT+0')
>>> DATE = "2021-08-01 12:00"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dt
datetime.datetime(2021, 8, 1, 12, 0)
>>> dtime.to_DT(dt)
DateTime('2021/08/01 12:00:00 GMT+0')
>>> dtime.to_DT(date.fromtimestamp(0))
DateTime('1970/01/01 00:00:00 GMT+0')
Timezone aware datetimes are converted to GMT+&lt;tzoffset&gt;:
>>> local_dt = dtime.to_zone(dt, "CET")
>>> local_dt
datetime.datetime(2021, 8, 1, 12, 0, tzinfo=<DstTzInfo 'CET' CEST+2:00:00 DST>)
>>> dtime.to_DT(local_dt)
DateTime('2021/08/01 12:00:00 GMT+2')
Convert to datetime
>>> dt = dtime.to_dt(DateTime())
>>> isinstance(dt, datetime)
True
Timezone naive DateTime is converted with Etc/GMT timezone:
>>> dt = DateTime(DATE)
>>> dt
DateTime('2021/08/01 12:00:00 GMT+0')
>>> dtime.is_timezone_naive(dt)
True
>>> dtime.to_dt(dt)
datetime.datetime(2021, 8, 1, 12, 0, tzinfo=<StaticTzInfo 'Etc/GMT'>)
Timezone aware DateTime is converted with its timezone:
>>> dt = dtime.to_zone(dt, "CET")
>>> dtime.is_timezone_naive(dt)
False
>>> dt
DateTime('2021/08/01 13:00:00 GMT+1')
>>> dtime.to_dt(dt)
datetime.datetime(2021, 8, 1, 13, 0, tzinfo=<StaticTzInfo 'Etc/GMT-1'>)
Get the timezone
Get the timezone from DateTime objects:
>>> dtime.get_timezone(DateTime("2022-02-25"))
'Etc/GMT'
>>> dtime.get_timezone(DateTime("2022-02-25 12:00 GMT+2"))
'Etc/GMT-2'
>>> dtime.get_timezone(DateTime("2022-02-25 12:00 GMT-2"))
'Etc/GMT+2'
Get the timezone from datetime.datetime objects:
>>> DATE = "2021-12-24 12:00"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.get_timezone(dt)
'Etc/GMT'
>>> dtime.get_timezone(dtime.to_zone(dt, "Europe/Berlin"))
'CET'
Get the timezone from datetime.date objects:
>>> dtime.get_timezone(dt.date())
'Etc/GMT'
Get the timezone info
Get the timezone info from TZ name:
>>> dtime.get_tzinfo("Etc/GMT")
<StaticTzInfo 'Etc/GMT'>
>>> dtime.get_tzinfo("Pacific/Fiji")
<DstTzInfo 'Pacific/Fiji' LMT+11:56:00 STD>
>>> dtime.get_tzinfo("UTC")
<UTC>
Get the timezone info from DateTime objects:
>>> dtime.get_tzinfo(DateTime("2022-02-25"))
<StaticTzInfo 'Etc/GMT'>
>>> dtime.get_tzinfo(DateTime("2022-02-25 12:00 GMT+2"))
<StaticTzInfo 'Etc/GMT-2'>
>>> dtime.get_tzinfo(DateTime("2022-02-25 12:00 GMT-2"))
<StaticTzInfo 'Etc/GMT+2'>
Get the timezone info from datetime.datetime objects:
>>> DATE = "2021-12-24 12:00"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.get_tzinfo(dt)
<UTC>
>>> dtime.get_tzinfo(dtime.to_zone(dt, "Europe/Berlin"))
<DstTzInfo 'CET' CET+1:00:00 STD>
Get the timezone info from datetime.date objects:
>>> dtime.get_tzinfo(dt.date())
<UTC>
Getting the timezone info from a naive date returns the default timezone info:
>>> dt_naive = dt.replace(tzinfo=None)
>>> dtime.get_tzinfo(dt_naive)
<UTC>
>>> dtime.get_tzinfo(dt_naive, default="Pacific/Fiji")
<DstTzInfo 'Pacific/Fiji' LMT+11:56:00 STD>
We can use a timezone info as the default parameter as well:
>>> dtime.get_tzinfo(dt_naive, default=dtime.pytz.UTC)
<UTC>
Default can also be a timezone name:
>>> dtime.get_tzinfo(dt_naive, default="America/Port_of_Spain")
<DstTzInfo 'America/Port_of_Spain' LMT-1 day, 19:36:00 STD>
And an error is raised if the default is not a valid timezone, even if the
date passed in is valid:
>>> dtime.get_tzinfo(dt_naive, default="Atlantida")
Traceback (most recent call last):
...
UnknownTimeZoneError: 'Atlantida'
Check if timezone is valid
>>> dtime.is_valid_timezone("Etc/GMT-1")
True
>>> dtime.is_valid_timezone("Etc/GMT-0100")
False
>>> dtime.is_valid_timezone("Europe/Berlin")
True
>>> dtime.is_valid_timezone("UTC")
True
>>> dtime.is_valid_timezone("CET")
True
>>> dtime.is_valid_timezone("CEST")
False
Get the default timezone from the system
>>> import os
>>> import time
>>> os.environ["TZ"] = "Europe/Berlin"
>>> dtime.get_os_timezone()
'Europe/Berlin'
>>> os.environ["TZ"] = ""
>>> dtime.time.tzname = ("CET", "CEST")
>>> dtime.get_os_timezone()
'CET'
Convert date to timezone
>>> DATE = "1970-01-01 01:00"
Convert datetime objects to a timezone:
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dt_utc = dtime.to_zone(dt, "UTC")
>>> dt_utc
datetime.datetime(1970, 1, 1, 1, 0, tzinfo=<UTC>)
>>> dtime.to_zone(dt_utc, "CET")
datetime.datetime(1970, 1, 1, 2, 0, tzinfo=<DstTzInfo 'CET' CET+1:00:00 STD>)
Convert date objects to a timezone (converts to datetime):
>>> d = date.fromordinal(dt.toordinal())
>>> d_utc = dtime.to_zone(d, "UTC")
>>> d_utc
datetime.datetime(1970, 1, 1, 0, 0, tzinfo=<UTC>)
Convert DateTime objects to a timezone:
>>> DT = DateTime(DATE)
>>> DT_utc = dtime.to_zone(DT, "UTC")
>>> DT_utc
DateTime('1970/01/01 01:00:00 UTC')
>>> dtime.to_zone(DT_utc, "CET")
DateTime('1970/01/01 02:00:00 GMT+1')
Make a POSIX timestamp
>>> DATE = "1970-01-01 01:00"
>>> DT = DateTime(DATE)
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.to_timestamp(DATE)
3600.0
>>> dtime.to_timestamp(dt)
3600.0
>>> dtime.to_timestamp(DT)
3600.0
>>> dtime.from_timestamp(dtime.to_timestamp(dt)) == dt
True
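A POSIX timestamp is simply the number of seconds elapsed since
1970-01-01 00:00:00 UTC, which is why the 01:00 date above yields 3600.0.
A stdlib-only sketch (hypothetical helper name) that, like the API above,
treats naive datetimes as GMT+0:

```python
import calendar
from datetime import datetime, timezone

def to_posix_timestamp(dt):
    """Seconds since the epoch; naive datetimes are treated as UTC."""
    if dt.tzinfo is None:
        # timegm interprets the time tuple as UTC (unlike time.mktime)
        return float(calendar.timegm(dt.timetuple()))
    return dt.timestamp()

dt = datetime(1970, 1, 1, 1, 0)
print(to_posix_timestamp(dt))  # 3600.0
# and back again, explicitly in UTC
print(datetime.fromtimestamp(3600, tz=timezone.utc).isoformat())
```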
Convert date to string
Check with valid date:
>>> DATE = "2022-08-01 12:00"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.date_to_string(dt)
'2022-08-01'
>>> dtime.date_to_string(dt, fmt="%H:%M")
'12:00'
>>> dtime.date_to_string(dt, fmt="%Y-%m-%dT%H:%M")
'2022-08-01T12:00'
Check that the "ValueError: strftime() methods require year >= 1900" error is handled gracefully:
>>> DATE = "1010-11-12 22:23"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.date_to_string(dt)
'1010-11-12'
>>> dtime.date_to_string(dt, fmt="%H:%M")
'22:23'
>>> dtime.date_to_string(dt, fmt="%Y-%m-%dT%H:%M")
'1010-11-12T22:23'
>>> dtime.date_to_string(dt, fmt="%Y-%m-%d %H:%M")
'1010-11-12 22:23'
>>> dtime.date_to_string(dt, fmt="%Y/%m/%d %H:%M")
'1010/11/12 22:23'
Check the same with DateTime objects:
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> DT = dtime.to_DT(dt)
>>> dtime.date_to_string(DT)
'1010-11-12'
>>> dtime.date_to_string(DT, fmt="%H:%M")
'22:23'
>>> dtime.date_to_string(DT, fmt="%Y-%m-%dT%H:%M")
'1010-11-12T22:23'
>>> dtime.date_to_string(DT, fmt="%Y-%m-%d %H:%M")
'1010-11-12 22:23'
>>> dtime.date_to_string(DT, fmt="%Y/%m/%d %H:%M")
'1010/11/12 22:23'
Check paddings in hour/minute:
>>> DATE = "2022-08-01 01:02"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.date_to_string(dt, fmt="%Y-%m-%d %H:%M")
'2022-08-01 01:02'
>>> DATE = "1755-08-01 01:02"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.date_to_string(dt, fmt="%Y-%m-%d %H:%M")
'1755-08-01 01:02'
Check 24h vs 12h format:
>>> DATE = "2022-08-01 23:01"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.date_to_string(dt, fmt="%Y-%m-%d %I:%M %p")
'2022-08-01 11:01 PM'
>>> DATE = "1755-08-01 23:01"
>>> dt = datetime.strptime(DATE, DATEFORMAT)
>>> dtime.date_to_string(dt, fmt="%Y-%m-%d %I:%M %p")
'1755-08-01 11:01 PM'
Localization
Values returned by TranslationService and dtime’s ulocalized_time are
consistent:
>>> ts = get_tool("translation_service")
>>> dt = "2022-12-14"
>>> ts_dt = ts.ulocalized_time(dt, long_format=True, domain="senaite.core")
>>> dt_dt = dtime.to_localized_time(dt, long_format=True)
>>> ts_dt == dt_dt
True
>>> dt = datetime(2022,12,14)
>>> ts_dt = ts.ulocalized_time(dt, long_format=True, domain="senaite.core")
>>> dt_dt = dtime.to_localized_time(dt, long_format=True)
>>> ts_dt == dt_dt
True
>>> dt = DateTime(2022,12,14)
>>> ts_dt = ts.ulocalized_time(dt, long_format=True, domain="senaite.core")
>>> dt_dt = dtime.to_localized_time(dt, long_format=True)
>>> ts_dt == dt_dt
True
But when a date with a year before 1900 is used, dtime falls back to the
standard ISO format, while TranslationService fails:
>>> dt = "1889-12-14"
>>> ts.ulocalized_time(dt, long_format=True, domain="senaite.core")
Traceback (most recent call last):
...
ValueError: year=1889 is before 1900; the datetime strftime() methods require year >= 1900
>>> dtime.to_localized_time(dt, long_format=True)
'1889-12-14 00:00'
>>> dt = datetime(1889,12,14)
>>> ts.ulocalized_time(dt, long_format=True, domain="senaite.core")
Traceback (most recent call last):
...
ValueError: year=1889 is before 1900; the datetime strftime() methods require year >= 1900
>>> dtime.to_localized_time(dt, long_format=True)
'1889-12-14 00:00'
>>> dt = DateTime(1889,12,14)
>>> ts.ulocalized_time(dt, long_format=True, domain="senaite.core")
Traceback (most recent call last):
...
ValueError: year=1889 is before 1900; the datetime strftime() methods require year >= 1900
>>> dtime.to_localized_time(dt, long_format=True)
'1889-12-14 00:00'
Support for ANSI X3.30 and ANSI X3.43.3
The YYYYMMDD format is defined by ANSI X3.30. Therefore, December 1, 1989
would be represented as 19891201. When times are transmitted (ASTM), they
shall be represented as HHMMSS and shall be linked to dates as specified by
ANSI X3.43.3. Date and time together shall be specified as up to a
14-character string (YYYYMMDD[HHMMSS]).
>>> dt = "19891201"
>>> dtime.ansi_to_dt(dt)
datetime.datetime(1989, 12, 1, 0, 0)
>>> dtime.to_DT(dt)
DateTime('1989/12/01 00:00:00 GMT+0')
>>> dt = "19891201131405"
>>> dtime.ansi_to_dt(dt)
datetime.datetime(1989, 12, 1, 13, 14, 5)
>>> dtime.to_DT(dt)
DateTime('1989/12/01 13:14:05 GMT+0')
>>> dt = "17891201131405"
>>> dtime.ansi_to_dt(dt)
datetime.datetime(1789, 12, 1, 13, 14, 5)
>>> dtime.to_DT(dt)
DateTime('1789/12/01 13:14:05 GMT+0')
>>> dt = "17891201132505"
>>> dtime.ansi_to_dt(dt)
datetime.datetime(1789, 12, 1, 13, 25, 5)
>>> dtime.to_DT(dt)
DateTime('1789/12/01 13:25:05 GMT+0')
>>> # No ANSI format
>>> dt = "230501"
>>> dtime.ansi_to_dt(dt)
Traceback (most recent call last):
...
ValueError: No ANSI format date
>>> # Month 13
>>> dt = "17891301132505"
>>> dtime.ansi_to_dt(dt)
Traceback (most recent call last):
...
ValueError: unconverted data remains: 5
>>> # Month 2, day 30
>>> dt = "20030230123408"
>>> dtime.ansi_to_dt(dt)
Traceback (most recent call last):
...
ValueError: day is out of range for month
>>> dtime.to_DT(dt) is None
True
We can also convert the other way round, simply by giving a date in any
valid string format:
>>> dt = "1989-12-01"
>>> dtime.to_ansi(dt, show_time=False)
'19891201'
>>> dtime.to_ansi(dt, show_time=True)
'19891201000000'
>>> dt = "19891201"
>>> dtime.to_ansi(dt, show_time=False)
'19891201'
>>> dtime.to_ansi(dt, show_time=True)
'19891201000000'
Or using datetime or DateTime as the input parameter:
>>> dt = "19891201131405"
>>> dt = dtime.ansi_to_dt(dt)
>>> dtime.to_ansi(dt, show_time=False)
'19891201'
>>> dtime.to_ansi(dt, show_time=True)
'19891201131405'
>>> DT = dtime.to_DT(dt)
>>> dtime.to_ansi(DT, show_time=False)
'19891201'
>>> dtime.to_ansi(DT, show_time=True)
'19891201131405'
We even support dates that are long before the epoch:
>>> min_date = dtime.datetime.min
>>> min_date
datetime.datetime(1, 1, 1, 0, 0)
>>> dtime.to_ansi(min_date)
'00010101000000'
Or long after the epoch:
>>> max_date = dtime.datetime.max
>>> max_date
datetime.datetime(9999, 12, 31, 23, 59, 59, 999999)
>>> dtime.to_ansi(max_date)
'99991231235959'
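The full year range works because formatting the digit fields directly
(instead of via strftime) sidesteps the year >= 1900 restriction of older
Python versions. A sketch of the idea:

```python
from datetime import datetime

def to_ansi(dt, show_time=True):
    """Format a datetime as an 8- or 14-character ANSI string."""
    # zero-padded %-formatting works for any year, unlike old strftime
    out = "%04d%02d%02d" % (dt.year, dt.month, dt.day)
    if show_time:
        out += "%02d%02d%02d" % (dt.hour, dt.minute, dt.second)
    return out

print(to_ansi(datetime(1989, 12, 1, 13, 14, 5)))  # 19891201131405
print(to_ansi(datetime.min))                      # 00010101000000
```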
Still, invalid dates return None:
>>> # Month 13
>>> dt = "17891301132505"
>>> dtime.to_ansi(dt) is None
True
>>> # Month 2, day 30
>>> dt = "20030230123408"
>>> dtime.to_ansi(dt) is None
True
Relative delta between two dates
We can extract the relative delta between two dates:
>>> dt1 = dtime.ansi_to_dt("20230515104405")
>>> dt2 = dtime.ansi_to_dt("20230515114405")
>>> dtime.get_relative_delta(dt1, dt2)
relativedelta(hours=+1)
We can even compare two dates from two different timezones:
>>> dt1_cet = dtime.to_zone(dt1, "CET")
>>> dt2_utc = dtime.to_zone(dt2, "UTC")
>>> dtime.get_relative_delta(dt1_cet, dt2_utc)
relativedelta(hours=+3)
>>> dt1_cet = dtime.to_zone(dt1, "CET")
>>> dt2_pcf = dtime.to_zone(dt2, "Pacific/Fiji")
>>> dtime.get_relative_delta(dt1_cet, dt2_pcf)
relativedelta(hours=-9)
If one of the dates is timezone naive, the system uses the timezone of the
other date:
>>> dt1_cet = dtime.to_zone(dt1, "CET")
>>> dt2_naive = dt2.replace(tzinfo=None)
>>> dtime.get_relative_delta(dt1_cet, dt2_naive)
relativedelta(hours=+3)
It also works when both are timezone naive:
>>> dt1_naive = dt1.replace(tzinfo=None)
>>> dt2_naive = dt2.replace(tzinfo=None)
>>> dtime.get_relative_delta(dt1_naive, dt2_naive)
relativedelta(hours=+1)
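The naive-borrows-timezone rule can be sketched with the standard library
alone (using timedelta rather than relativedelta; the helper name is
hypothetical):

```python
from datetime import datetime, timedelta, timezone

def diff(dt1, dt2):
    """dt2 - dt1; a naive datetime borrows the other one's timezone."""
    if dt1.tzinfo is None and dt2.tzinfo is not None:
        dt1 = dt1.replace(tzinfo=dt2.tzinfo)
    elif dt2.tzinfo is None and dt1.tzinfo is not None:
        dt2 = dt2.replace(tzinfo=dt1.tzinfo)
    return dt2 - dt1

cet = timezone(timedelta(hours=1))  # fixed-offset stand-in for CET
aware = datetime(2023, 5, 15, 10, 44, 5, tzinfo=cet)
naive = datetime(2023, 5, 15, 11, 44, 5)
print(diff(aware, naive))  # 1:00:00 -- naive is interpreted as CET
```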
If we don’t specify dt2, the system simply uses the current datetime:
>>> rel_now = dtime.get_relative_delta(dt1, datetime.now())
>>> rel_wo = dtime.get_relative_delta(dt1)
>>> rel_now = (rel_now.years, rel_now.months, rel_now.days, rel_now.hours)
>>> rel_wo = (rel_wo.years, rel_wo.months, rel_wo.days, rel_wo.hours)
>>> rel_now == rel_wo
True
We can even compare min and max dates:
>>> dt1 = dtime.datetime.min
>>> dt2 = dtime.datetime.max
>>> dtime.get_relative_delta(dtime.datetime.min, dtime.datetime.max)
relativedelta(years=+9998, months=+11, days=+30, hours=+23, minutes=+59, seconds=+59, microseconds=+999999)
We can even call the function with types that are not datetime, but can be
converted to datetime:
>>> dtime.get_relative_delta("19891201131405", "20230515114400")
relativedelta(years=+33, months=+5, days=+13, hours=+22, minutes=+29, seconds=+55)
But a ValueError is raised if invalid dates are used:
>>> dtime.get_relative_delta("17891301132505")
Traceback (most recent call last):
...
ValueError: No valid date or dates
Even if the from date is valid, but the to date is not:
>>> dtime.get_relative_delta("19891201131405", "20230535114400")
Traceback (most recent call last):
...
ValueError: No valid date or dates
We can also compare two datetimes where the “from” date is later than the “to” date:
>>> dtime.get_relative_delta("20230515114400", "19891201131405")
relativedelta(years=-33, months=-5, days=-13, hours=-22, minutes=-29, seconds=-55)
Or compare two dates that are exactly the same:
>>> dtime.get_relative_delta("20230515114400", "20230515114400")
relativedelta()
We can compare dates without time as well:
>>> from_date = dtime.date(2023, 5, 6)
>>> to_date = dtime.date(2023, 5, 7)
>>> dtime.get_relative_delta(from_date, to_date)
relativedelta(days=+1)
SENAITE geo API
The geo API provides functions for search and manipulation of geographic
entities/locations, like countries and subdivisions.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_geo
Test Setup
Imports
>>> from senaite.core.api import geo
Get all countries
>>> countries = geo.get_countries()
>>> len(countries)
249
Check if an object is a country
>>> geo.is_country(countries[0])
True
>>> geo.is_country("Spain")
False
Get a country by term
>>> geo.get_country("es")
Country(alpha_2=u'ES', alpha_3=u'ESP', name=u'Spain', numeric=u'724', official_name=u'Kingdom of Spain')
>>> geo.get_country("Spain")
Country(alpha_2=u'ES', alpha_3=u'ESP', name=u'Spain', numeric=u'724', official_name=u'Kingdom of Spain')
>>> geo.get_country("Kingdom of Spain")
Country(alpha_2=u'ES', alpha_3=u'ESP', name=u'Spain', numeric=u'724', official_name=u'Kingdom of Spain')
Get a non-existing country
>>> geo.get_country("Pluto")
Traceback (most recent call last):
[...]
ValueError: Could not find a record for 'pluto'
>>> geo.get_country("Pluto", default=None) is None
True
Get a subdivision or country
We can directly retrieve a subdivision or a country in a single call:
>>> geo.get_country_or_subdivision("Spain")
Country(alpha_2=u'ES', alpha_3=u'ESP', name=u'Spain', numeric=u'724', official_name=u'Kingdom of Spain')
>>> geo.get_country_or_subdivision("Catalunya")
Subdivision(code=u'ES-CT', country_code=u'ES', name=u'Catalunya', parent_code=None, type=u'Autonomous community')
>>> geo.get_country_or_subdivision("Pluto")
Traceback (most recent call last):
[...]
ValueError: Could not find a record for 'pluto'
Get subdivisions of a country
We can get the subdivisions immediately below a Country object, sorted by code:
>>> country = geo.get_country("es")
>>> subdivisions = geo.get_subdivisions(country)
>>> subdivisions[0]
Subdivision(code=u'ES-AN', country_code=u'ES', name=u'Andaluc\xeda', parent_code=None, type=u'Autonomous community')
Or we can get them directly with any search term for the country:
>>> subdivisions = geo.get_subdivisions("es")
>>> subdivisions[0]
Subdivision(code=u'ES-AN', country_code=u'ES', name=u'Andaluc\xeda', parent_code=None, type=u'Autonomous community')
Check if an object is a Subdivision
>>> geo.is_subdivision(subdivisions[0])
True
>>> geo.is_subdivision(country)
False
>>> geo.is_subdivision("Catalunya")
False
>>> geo.is_country(subdivisions[0])
False
Get subdivisions of a subdivision
Likewise, we can get the subdivisions immediately below a Subdivision object,
sorted by code:
>>> subdivisions = geo.get_subdivisions("es")
>>> subsubdivisions = geo.get_subdivisions(subdivisions[0])
>>> subsubdivisions[0]
Subdivision(code=u'ES-AL', country_code=u'ES', name=u'Almer\xeda', parent=u'AN', parent_code=u'ES-AN', type=u'Province')
>>> len(subsubdivisions)
8
Get the code of a country
We can obtain the 2-letter code of a country directly:
>>> geo.get_country_code(country)
u'ES'
Or from any of its subdivisions:
>>> geo.get_country_code(subdivisions[0])
u'ES'
>>> geo.get_country_code(subsubdivisions[0])
u'ES'
We can even get the country code with only text:
>>> geo.get_country_code("Spain")
u'ES'
>>> geo.get_country_code("Germany")
u'DE'
Get a subdivision
It is also possible to retrieve a subdivision directly by search term:
>>> geo.get_subdivision("ES-CA")
Subdivision(code=u'ES-CA', country_code=u'ES', name=u'C\xe1diz', parent=u'AN', parent_code=u'ES-AN', type=u'Province')
>>> geo.get_subdivision("Catalunya")
Subdivision(code=u'ES-CT', country_code=u'ES', name=u'Catalunya', parent_code=None, type=u'Autonomous community')
>>> geo.get_subdivision("Washington")
Subdivision(code=u'US-WA', country_code=u'US', name=u'Washington', parent_code=None, type=u'State')
>>> geo.get_subdivision("Barcelona")
Subdivision(code=u'ES-B', country_code=u'ES', name=u'Barcelona', parent=u'CT', parent_code=u'ES-CT', type=u'Province')
We can also specify the parent:
>>> spain = geo.get_country("es")
>>> catalunya = geo.get_subdivision("Catalunya", parent=spain)
>>> catalunya
Subdivision(code=u'ES-CT', country_code=u'ES', name=u'Catalunya', parent_code=None, type=u'Autonomous community')
So only subdivisions immediately below the specified parent are returned:
>>> geo.get_subdivision("Barcelona", parent=spain, default=None) is None
True
>>> geo.get_subdivision("Barcelona", parent=catalunya)
Subdivision(code=u'ES-B', country_code=u'ES', name=u'Barcelona', parent=u'CT', parent_code=u'ES-CT', type=u'Province')
We can even specify a search term for the parent:
>>> geo.get_subdivision("Barcelona", parent="Catalunya")
Subdivision(code=u'ES-B', country_code=u'ES', name=u'Barcelona', parent=u'CT', parent_code=u'ES-CT', type=u'Province')
API for sending emails
The mail API provides a simple interface to send emails in SENAITE.
NOTE: The API is called mail to avoid import conflicts with the Python email
standard library.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_mail
Test Setup
Imports:
>>> import os
>>> from __future__ import print_function
>>> from bika.lims.api.mail import *
Variables:
>>> cur_dir = os.path.dirname(__file__)
>>> filename = "logo.png"
>>> filepath = os.path.join(cur_dir, filename)
Email Address
This function converts an email address and name pair to a string value suitable
for an RFC 2822 From, To or Cc header:
>>> to_address = to_email_address("rb@ridingbytes.com", "Ramon Bartl")
>>> to_address
'Ramon Bartl <rb@ridingbytes.com>'
>>> to_email_address("rb@ridingbytes.com")
'rb@ridingbytes.com'
Email Subject
This function converts a string to a compliant RFC 2822 subject header:
>>> subject = u"Liberté"
>>> email_subject = to_email_subject(subject)
>>> email_subject
<email.header.Header instance at ...>
>>> print(email_subject)
=?utf-8?q?Libert=C3=83=C2=A9?=
Email Body Text
This function converts a given text to a text/plain MIME document:
>>> text = "Check out SENAITE LIMS: $url"
>>> email_body = to_email_body_text(text, url="https://www.senaite.com")
>>> email_body
<email.mime.text.MIMEText instance at ...>
>>> email_body.get_content_type()
'text/plain'
>>> print(email_body)
From ...
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
<BLANKLINE>
Check out SENAITE LIMS: https://www.senaite.com
This function converts a given text to a text/html MIME document:
>>> html = "<p>Check out <strong>SENAITE LIMS:</strong> $url"
>>> email_body = to_email_body_text(html, url="https://www.senaite.com", html=True)
>>> email_body
<email.mime.text.MIMEText instance at ...>
>>> email_body.get_content_type()
'text/html'
>>> print(email_body)
From ...
MIME-Version: 1.0
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
<BLANKLINE>
<p>Check out <strong>SENAITE LIMS:</strong> https://www.senaite.com
Email Attachment
This function converts a filename with given filedata to a MIME attachment:
>>> attachment1 = to_email_attachment(file(filepath), filename=filename)
>>> attachment1
<email.mime.base.MIMEBase instance at ...>
>>> print(attachment1)
From ...
Content-Type: image/png
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=logo.png
<BLANKLINE>
iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABGdBTUEAALGPC/xhBQAAACBjSFJN
...
5/sfV5M/kISv300AAAAASUVORK5CYII=
It is also possible to provide the full path to a file:
>>> attachment2 = to_email_attachment(filepath)
>>> attachment2
<email.mime.base.MIMEBase instance at ...>
>>> print(attachment2)
From ...
Content-Type: image/png
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=logo.png
<BLANKLINE>
iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABGdBTUEAALGPC/xhBQAAACBjSFJN
...
5/sfV5M/kISv300AAAAASUVORK5CYII=
Providing an attachment works as well:
>>> attachment3 = to_email_attachment(attachment2)
>>> attachment3 == attachment2
True
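The Content-Type: image/png header shown in the attachments above is
presumably derived from the filename; Python's standard mimetypes module
provides exactly this kind of guess:

```python
import mimetypes

# guess_type maps a filename extension to a (type, encoding) tuple
print(mimetypes.guess_type("logo.png")[0])    # image/png
print(mimetypes.guess_type("report.pdf")[0])  # application/pdf
```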
Email Address Validation
This function checks if the given email address is valid:
>>> is_valid_email_address("rb@ridingbytes.com")
True
>>> is_valid_email_address(u"rb@ridingbytes.de")
True
>>> is_valid_email_address("rb@ridingbytes")
False
>>> is_valid_email_address("@ridingbyte.com")
False
>>> is_valid_email_address("rb")
False
>>> is_valid_email_address(None)
False
>>> is_valid_email_address(object())
False
Parse Email Address
This function parses an email address string into a (name, email) tuple:
>>> parse_email_address("Ramon Bartl <rb@ridingbytes.com>")
('Ramon Bartl', 'rb@ridingbytes.com')
>>> parse_email_address("<rb@ridingbytes.com>")
('', 'rb@ridingbytes.com')
>>> parse_email_address("rb@ridingbytes.com")
('', 'rb@ridingbytes.com')
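Both directions map onto Python's standard email.utils helpers, which handle
the RFC 2822 formatting details:

```python
from email.utils import formataddr, parseaddr

# formataddr builds an RFC 2822 address header value,
# parseaddr splits it back into a (name, email) tuple
addr = formataddr(("Ramon Bartl", "rb@ridingbytes.com"))
print(addr)             # Ramon Bartl <rb@ridingbytes.com>
print(parseaddr(addr))  # ('Ramon Bartl', 'rb@ridingbytes.com')
print(parseaddr("rb@ridingbytes.com"))  # ('', 'rb@ridingbytes.com')
```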
Compose Email
This function composes a new MIME message:
>>> message = compose_email("from@senaite.com",
... ["to@senaite.com", "to2@senaite.com"],
... "Test Émail",
... "Check out the new SENAITE website: $url",
... attachments=[filepath],
... url="https://www.senaite.com")
>>> message
<email.mime.multipart.MIMEMultipart instance at ...>
>>> print(message)
From ...
Content-Type: multipart/mixed; boundary="..."
MIME-Version: 1.0
Subject: =?utf-8?q?Test_=C3=89mail?=
From: from@senaite.com
To: to@senaite.com, to2@senaite.com
<BLANKLINE>
This is a multi-part message in MIME format.
<BLANKLINE>
...
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
<BLANKLINE>
Check out the new SENAITE website: https://www.senaite.com
...
Content-Type: image/png
MIME-Version: 1.0
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=logo.png
<BLANKLINE>
iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABGdBTUEAALGPC/xhBQAAACBjSFJN
...
5/sfV5M/kISv300AAAAASUVORK5CYII=
...
<BLANKLINE>
By default, the body is not encoded as html:
>>> body = "<p>Check out the new SENAITE website: $url</p>"
>>> message = compose_email("from@senaite.com",
... ["to@senaite.com", "to2@senaite.com"],
... "Test Émail",
... body,
... attachments=[filepath],
... url="https://www.senaite.com")
>>> print(message)
From ...
...
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
<BLANKLINE>
<p>Check out the new SENAITE website: https://www.senaite.com</p>
...
Unless the html parameter is set to True:
>>> message = compose_email("from@senaite.com",
... ["to@senaite.com", "to2@senaite.com"],
... "Test Émail",
... body,
... html=True,
... attachments=[filepath],
... url="https://www.senaite.com")
>>> print(message)
From ...
...
MIME-Version: 1.0
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
<BLANKLINE>
<p>Check out the new SENAITE website: https://www.senaite.com</p>
...
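The same kind of message can be assembled with the standard library alone. The following sketch (the build_message helper is hypothetical) mirrors the $url template substitution shown above:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from string import Template

def build_message(from_addr, to_addrs, subject, body, html=False, **kw):
    # Substitute $placeholders in the body, like the "url" example above
    text = Template(body).safe_substitute(**kw)
    msg = MIMEMultipart()
    msg["Subject"] = subject
    msg["From"] = from_addr
    msg["To"] = ", ".join(to_addrs)
    subtype = "html" if html else "plain"
    msg.attach(MIMEText(text, subtype, "utf-8"))
    return msg

msg = build_message("from@senaite.com", ["to@senaite.com"],
                    "Test", "Check out the new SENAITE website: $url",
                    url="https://www.senaite.com")
print(msg["To"])  # to@senaite.com
```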
API Measure
The measure API provides functions for operating with physical quantities and
units.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_measure
Test Setup
Needed Imports
>>> from senaite.core.api import measure as mapi
Get a magnitude object
Magnitude-type objects are used to operate with physical quantities and
conversions while ensuring unit consistency:
>>> mapi.get_magnitude("10.0mL")
<magnitude.Magnitude instance at ...>
>>> mapi.get_magnitude("15 ml")
<magnitude.Magnitude instance at ...>
>>> mapi.get_magnitude("0.23mg/L")
<magnitude.Magnitude instance at ...>
If no valid units are provided, an error arises:
>>> mapi.get_magnitude("0.23po")
Traceback (most recent call last):
[...]
APIError: Don't know about unit po
An error also arises when the value is not of a valid type:
>>> mapi.get_magnitude(None)
Traceback (most recent call last):
[...]
APIError: None is not supported.
>>> mapi.get_magnitude((10, "ml"))
Traceback (most recent call last):
[...]
APIError: (10, 'ml') is not supported.
An error also arises if the value is of a valid type but has an invalid format:
>>> mapi.get_magnitude("1")
Traceback (most recent call last):
[...]
APIError: No valid format: 1
>>> mapi.get_magnitude("ml")
Traceback (most recent call last):
[...]
APIError: No valid format: ml
>>> mapi.get_magnitude("10ml 12ml")
Traceback (most recent call last):
[...]
APIError: Don't know about unit 12ml
>>> mapi.get_magnitude("10 20.34 ml")
Traceback (most recent call last):
[...]
APIError: Don't know about unit 20.3
We can also pass another magnitude as the value:
>>> mg = mapi.get_magnitude("10ml")
>>> mapi.get_magnitude(mg)
<magnitude.Magnitude instance at ...>
We can use the default parameter as a fallback return value:
>>> mapi.get_magnitude(None, default="10ml")
<magnitude.Magnitude instance at ...>
>>> mg = mapi.get_magnitude("10.0ml")
>>> mapi.get_magnitude(None, default=mg)
<magnitude.Magnitude instance at ...>
But the default must be convertible too:
>>> mapi.get_magnitude(None, default=None)
Traceback (most recent call last):
[...]
APIError: None is not supported.
>>> mapi.get_magnitude(None, default="1")
Traceback (most recent call last):
[...]
APIError: No valid format: 1
Check if a value is a magnitude type
We can check if a given value is an instance of a magnitude type as follows:
>>> mapi.is_magnitude(None)
False
>>> mapi.is_magnitude(object())
False
>>> mapi.is_magnitude("10ml")
False
>>> mg = mapi.get_magnitude("10ml")
>>> mapi.is_magnitude(mg)
True
Get the float quantity
We can easily get the quantity part of the value as a float:
>>> mapi.get_quantity("10ml")
10.0
>>> mapi.get_quantity("10.4g")
10.4
We can even pass a Magnitude object:
>>> mg = mapi.get_magnitude("10.5 mL")
>>> mapi.get_quantity(mg)
10.5
But an error arises if the value is not suitable:
>>> mapi.get_quantity(None)
Traceback (most recent call last):
[...]
APIError: None is not supported.
>>> mapi.get_quantity("1")
Traceback (most recent call last):
[...]
APIError: No valid format: 1
>>> mapi.get_quantity("0.23po")
Traceback (most recent call last):
[...]
APIError: Don't know about unit po
Conversion of a quantity to another unit
We can easily convert a quantity to another unit:
>>> mapi.get_quantity("1mL", unit="L")
0.001
>>> mapi.get_quantity("1mL", unit="dL")
0.01
>>> mapi.get_quantity("10.2L", unit="mL")
10200.0
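Conceptually, such conversions boil down to SI prefix factors. A toy sketch (the convert helper is hypothetical and not the magnitude library):

```python
# Hypothetical converter (not the magnitude library): map SI prefixes to
# factors and rescale the quantity between two prefixed variants of a unit.
PREFIX_FACTORS = {"m": 1e-3, "c": 1e-2, "d": 1e-1, "": 1.0, "k": 1e3}

def convert(value, from_prefix, to_prefix):
    return value * PREFIX_FACTORS[from_prefix] / PREFIX_FACTORS[to_prefix]

print(convert(1.0, "m", ""))   # 1 mL in L -> 0.001
print(convert(1.0, "m", "d"))  # 1 mL in dL
print(convert(10.2, "", "m"))  # 10.2 L in mL, approximately 10200.0
```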
Check volumes
The API makes checking volumes easy:
>>> mapi.is_volume("10mL")
True
>>> mapi.is_volume("2.3 L")
True
>>> mapi.is_volume("0.02 dl")
True
>>> mapi.is_volume("10mg")
False
>>> mapi.is_volume("2.3 kg")
False
>>> mapi.is_volume("0.02 dg")
False
>>> mapi.is_volume(2)
False
>>> mapi.is_volume(None)
False
Check weights
The API makes checking weights easy:
>>> mapi.is_weight("10mg")
True
>>> mapi.is_weight("2.3 kg")
True
>>> mapi.is_weight("0.02 dg")
True
>>> mapi.is_weight("10mL")
False
>>> mapi.is_weight("2.3 L")
False
>>> mapi.is_weight("0.02 dl")
False
>>> mapi.is_weight(2)
False
>>> mapi.is_weight(None)
False
API Security
The security API provides a simple interface to control access in SENAITE.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_security
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from bika.lims.api.security import *
>>> from senaite.core.permissions import FieldEditAnalysisHidden
>>> from senaite.core.permissions import FieldEditAnalysisResult
>>> from senaite.core.permissions import FieldEditAnalysisRemarks
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def new_sample(services):
... values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "DateSampled": date_now,
... "SampleType": sampletype.UID()}
... service_uids = map(api.get_uid, services)
... return create_analysisrequest(client, request, values, service_uids)
>>> def get_analysis(sample, id):
... ans = sample.getAnalyses(getId=id, full_objects=True)
... if len(ans) != 1:
... return None
... return ans[0]
Environment Setup
Setup the testing environment:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
>>> setRoles(portal, TEST_USER_ID, ['LabManager', ])
>>> user = api.get_current_user()
LIMS Setup
Setup the Lab for testing:
>>> setup.setSelfVerificationEnabled(True)
>>> analysisservices = setup.bika_analysisservices
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH")
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="Water")
Content Setup
Create some Analysis Services with unique Keywords:
>>> Ca = api.create(analysisservices, "AnalysisService", title="Calcium", Keyword="Ca")
>>> Mg = api.create(analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg")
>>> Cu = api.create(analysisservices, "AnalysisService", title="Copper", Keyword="Cu")
>>> Fe = api.create(analysisservices, "AnalysisService", title="Iron", Keyword="Fe")
>>> Au = api.create(analysisservices, "AnalysisService", title="Aurum", Keyword="Au")
>>> Test1 = api.create(analysisservices, "AnalysisService", title="Calculated Test Service 1", Keyword="Test1")
>>> Test2 = api.create(analysisservices, "AnalysisService", title="Calculated Test Service 2", Keyword="Test2")
Create a new Sample:
>>> sample = new_sample([Cu, Fe, Au])
Get the contained Cu Analysis:
>>> cu = get_analysis(sample, Cu.getKeyword())
Get a security manager for the current thread
A security manager provides methods for checking access and managing executable
context and policies:
>>> get_security_manager()
<AccessControl.ImplPython.SecurityManager instance at ...>
Get the possible permissions of an object
The possible permissions include the permissions on the object and the inherited
permissions:
>>> possible_permissions = get_possible_permissions_for(cu)
>>> "Modify portal content" in possible_permissions
True
Get the mapped permissions of an object
While the possible permissions include all permissions that could apply to the
object, only a few of them are actually mapped to it.
The function get_mapped_permissions_for returns only those permissions which
have roles mapped on the given object or on objects within the acquisition
chain.
>>> mapped_permissions = get_mapped_permissions_for(cu)
The mapped permissions are therefore a subset of the possible permissions:
>>> set(mapped_permissions).issubset(possible_permissions)
True
Get the granted permissions
This function returns the allowed permissions on an object for a user:
>>> allowed_permissions = get_allowed_permissions_for(cu)
The allowed permissions are a subset of the mapped permissions:
>>> set(allowed_permissions).issubset(mapped_permissions)
True
Get the non-granted permissions
This function returns the disallowed permissions on an object for a user:
>>> disallowed_permissions = get_disallowed_permissions_for(cu)
The disallowed permissions are a subset of the mapped permissions:
>>> set(disallowed_permissions).issubset(mapped_permissions)
True
They are mutually exclusive with the allowed permissions:
>>> set(disallowed_permissions).isdisjoint(allowed_permissions)
True
Together, the allowed and disallowed permissions make up exactly the mapped permissions:
>>> set(allowed_permissions + disallowed_permissions) == set(mapped_permissions)
True
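These invariants are plain set algebra and can be illustrated with Python sets (the permission names below are examples only):

```python
# The mapped/allowed/disallowed relationship is plain set algebra.
# Permission names below are examples only.
mapped = {"View", "Modify portal content", "Delete objects"}
allowed = {"View", "Modify portal content"}  # granted to the current user
disallowed = mapped - allowed                # everything mapped but not granted

assert allowed.issubset(mapped)
assert disallowed.issubset(mapped)
assert disallowed.isdisjoint(allowed)        # mutually exclusive
assert allowed | disallowed == mapped        # together they cover the mapped set
print(sorted(disallowed))  # ['Delete objects']
```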
Check if a user has a permission granted
This function checks if the user has a permission granted on an object:
>>> check_permission(get_allowed_permissions_for(cu)[0], cu)
True
>>> check_permission(get_disallowed_permissions_for(cu)[0], cu)
False
Non-existing permissions return False:
>>> check_permission("nonexisting_permission", cu)
False
Get the granted permissions of a role
This function returns the permissions that are granted to a role:
>>> get_permissions_for_role("Sampler", cu)
['senaite.core: Field: Edit Analysis Remarks', 'senaite.core: Field: Edit Analysis Result']
Get the mapped roles of a permission
This function is the opposite of get_permissions_for_role and returns
the roles for a given permission:
>>> get_roles_for_permission(FieldEditAnalysisResult, cu)
('LabManager', 'Manager', 'Sampler')
Get the roles of a user
This function returns the global roles the user has:
>>> get_roles()
['Authenticated', 'LabManager']
>>> setRoles(portal, TEST_USER_ID, ['LabManager', 'Sampler', ])
>>> get_roles()
['Authenticated', 'LabManager', 'Sampler']
The optional user parameter allows getting the roles of another user:
>>> get_roles("admin")
['Authenticated', 'Manager']
Get the local roles of a user
This function returns the local granted roles the user has for the given object:
>>> get_local_roles_for(cu)
['Owner']
The optional user parameter allows getting the local roles of another user:
>>> get_local_roles_for(cu, "admin")
[]
Granting local roles
This function allows granting local roles on an object:
>>> grant_local_roles_for(cu, "Sampler")
['Owner', 'Sampler']
>>> grant_local_roles_for(cu, ["Analyst", "LabClerk"])
['Analyst', 'LabClerk', 'Owner', 'Sampler']
>>> get_local_roles_for(cu)
['Analyst', 'LabClerk', 'Owner', 'Sampler']
Revoking local roles
This function allows revoking local roles on an object:
>>> revoke_local_roles_for(cu, "Sampler")
['Analyst', 'LabClerk', 'Owner']
>>> revoke_local_roles_for(cu, ["Analyst", "LabClerk"])
['Owner']
>>> get_local_roles_for(cu)
['Owner']
Getting all valid roles
This function lists all valid roles for an object:
>>> get_valid_roles_for(cu)
['Analyst', ...]
Granting a permission to a role
This function allows granting a permission to one or more roles:
>>> get_permissions_for_role("Sampler", cu)
['senaite.core: Field: Edit Analysis Remarks', 'senaite.core: Field: Edit Analysis Result']
>>> grant_permission_for(cu, FieldEditAnalysisHidden, "Sampler", acquire=0)
>>> get_permissions_for_role("Sampler", cu)
['senaite.core: Field: Edit Analysis Hidden', 'senaite.core: Field: Edit Analysis Remarks', 'senaite.core: Field: Edit Analysis Result']
Revoking a permission from a role
This function allows revoking a permission from one or more roles:
>>> revoke_permission_for(cu, FieldEditAnalysisHidden, "Sampler", acquire=0)
>>> get_permissions_for_role("Sampler", cu)
['senaite.core: Field: Edit Analysis Remarks', 'senaite.core: Field: Edit Analysis Result']
Manage permissions
This function allows setting a permission explicitly for the given roles (dropping all other roles):
>>> grant_permission_for(cu, FieldEditAnalysisResult, ["Analyst", "LabClerk"])
>>> get_permissions_for_role("Analyst", cu)
['senaite.core: Field: Edit Analysis Result']
>>> get_permissions_for_role("LabClerk", cu)
['senaite.core: Field: Edit Analysis Result']
Now we use manage_permission_for to grant this permission only to Samplers:
>>> manage_permission_for(cu, FieldEditAnalysisResult, ["Sampler"])
The Sampler now has the permission granted:
>>> get_permissions_for_role("Sampler", cu)
['senaite.core: Field: Edit Analysis Remarks', 'senaite.core: Field: Edit Analysis Result']
But the Analyst and the LabClerk do not anymore:
>>> get_permissions_for_role("Analyst", cu)
[]
>>> get_permissions_for_role("LabClerk", cu)
[]
API Snapshot
The snapshot API provides a simple interface to manage object snapshots.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_snapshot
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from bika.lims.api.snapshot import *
>>> from senaite.core.permissions import FieldEditAnalysisHidden
>>> from senaite.core.permissions import FieldEditAnalysisResult
>>> from senaite.core.permissions import FieldEditAnalysisRemarks
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from zope.lifecycleevent import modified
>>> from zope.component.globalregistry import getGlobalSiteManager
>>> from zope.lifecycleevent.interfaces import IObjectModifiedEvent
>>> from bika.lims.subscribers.auditlog import ObjectModifiedEventHandler
>>> from zope.interface import Interface
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def new_sample(services):
... values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "DateSampled": date_now,
... "SampleType": sampletype.UID()}
... service_uids = map(api.get_uid, services)
... return create_analysisrequest(client, request, values, service_uids)
>>> def get_analysis(sample, id):
... ans = sample.getAnalyses(getId=id, full_objects=True)
... if len(ans) != 1:
... return None
... return ans[0]
>>> def register_event_subscribers():
... gsm = getGlobalSiteManager()
... gsm.registerHandler(ObjectModifiedEventHandler, (Interface, IObjectModifiedEvent))
>>> def unregister_event_subscribers():
... gsm = getGlobalSiteManager()
... gsm.unregisterHandler(ObjectModifiedEventHandler, (Interface, IObjectModifiedEvent))
Environment Setup
Setup the testing environment:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
>>> setRoles(portal, TEST_USER_ID, ['LabManager', ])
>>> user = api.get_current_user()
LIMS Setup
Setup the Lab for testing:
>>> setup.setSelfVerificationEnabled(True)
>>> analysisservices = setup.bika_analysisservices
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH")
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="Water")
Content Setup
Create some Analysis Services with unique Keywords:
>>> Ca = api.create(analysisservices, "AnalysisService", title="Calcium", Keyword="Ca")
>>> Mg = api.create(analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg")
>>> Cu = api.create(analysisservices, "AnalysisService", title="Copper", Keyword="Cu")
>>> Fe = api.create(analysisservices, "AnalysisService", title="Iron", Keyword="Fe")
>>> Au = api.create(analysisservices, "AnalysisService", title="Aurum", Keyword="Au")
>>> Test1 = api.create(analysisservices, "AnalysisService", title="Calculated Test Service 1", Keyword="Test1")
>>> Test2 = api.create(analysisservices, "AnalysisService", title="Calculated Test Service 2", Keyword="Test2")
Create a new Sample:
>>> sample = new_sample([Cu, Fe, Au])
Get the contained Cu Analysis:
>>> cu = get_analysis(sample, Cu.getKeyword())
>>> fe = get_analysis(sample, Fe.getKeyword())
>>> au = get_analysis(sample, Au.getKeyword())
Check if an object supports snapshots
We can use the supports_snapshots function to check if an object supports
snapshots:
>>> supports_snapshots(sample)
True
>>> supports_snapshots(object())
False
Get the snapshot storage
The snapshot storage holds all the raw snapshots in JSON format:
>>> storage = get_storage(sample)
>>> storage
['{...}']
Get all snapshots
To get the data snapshots of an object, we can call get_snapshots:
>>> snapshots = get_snapshots(sample)
>>> snapshots
[{...}]
Check if an object has snapshots
To check if an object has snapshots, we can call has_snapshots:
>>> has_snapshots(sample)
True
>>> has_snapshots(cu)
True
>>> has_snapshots(fe)
True
>>> has_snapshots(au)
True
>>> has_snapshots(setup)
False
Get the number of snapshots
To check the number of snapshots (versions) an object has, we can call
get_snapshot_count:
>>> get_snapshot_count(sample)
1
>>> get_snapshot_count(setup)
0
Get the version of an object
The first snapshot of an object is considered version 0.
If the object does not have any snapshots yet, this function returns -1:
>>> get_version(object())
-1
Get a snapshot by version
Snapshots can be retrieved by their index in the snapshot storage (version):
>>> get_snapshot_by_version(sample, 0)
{...}
Negative versions return None:
>>> get_snapshot_by_version(sample, -1)
Non-existing versions also return None:
>>> get_snapshot_by_version(sample, 9999)
Get the version of a snapshot
The index (version) of each snapshot can be retrieved:
>>> snap0 = get_snapshot_by_version(sample, 0)
>>> get_snapshot_version(sample, snap0)
0
Non-existing versions return -1:
>>> snap1 = get_snapshot_by_version(sample, 1)
>>> get_snapshot_version(sample, snap1)
-1
Get the last snapshot taken
To get the latest snapshot, we can call get_last_snapshot:
>>> last_snap = get_last_snapshot(sample)
>>> get_snapshot_version(sample, last_snap)
0
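Conceptually, the snapshot storage behaves like an append-only list whose index is the version number. A toy model (not SENAITE's actual storage):

```python
# Toy model of the snapshot storage (not SENAITE's actual implementation):
# an append-only list whose index is the version number.
snapshots = []

def take_snapshot(data):
    snapshots.append(dict(data))
    return snapshots[-1]

def get_version():
    return len(snapshots) - 1  # -1 while no snapshot exists

assert get_version() == -1
take_snapshot({"CCEmails": ""})
snap = take_snapshot({"CCEmails": "rb@ridingbytes.com"})
assert get_version() == 1
assert snapshots[get_version()] == snap  # last snapshot == highest version
```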
Take a new Snapshot
Snapshots can be taken programmatically with the function take_snapshot:
>>> get_version(sample)
0
Now we take a new snapshot:
>>> snapshot = take_snapshot(sample)
The version should be increased:
>>> get_version(sample)
1
The new snapshot should be the most recent snapshot now:
>>> last_snapshot = get_last_snapshot(sample)
>>> last_snapshot == snapshot
True
Comparing Snapshots
The changes of two snapshots can be compared with compare_snapshots:
>>> snap1 = get_snapshot_by_version(sample, 1)
Add 2 more analyses (Mg and Ca):
>>> sample.edit(Analyses=[Cu, Fe, Au, Mg, Ca])
>>> snap2 = take_snapshot(sample)
Passing the raw=True keyword returns the raw field changes. In this case, the
field Analyses is a UIDReferenceField which initially contained 3 UID
references and, after adding 2 analyses, contains 5:
>>> diff_raw = compare_snapshots(snap1, snap2, raw=True)
>>> diff_raw["Analyses"]
[([u'...', u'...', u'...'], ['...', '...', '...', '...', '...'])]
It is also possible to process the values to get a more human-readable diff:
>>> diff = compare_snapshots(snap1, snap2, raw=False)
>>> diff["Analyses"]
[('Aurum; Copper; Iron', 'Aurum; Calcium; Copper; Iron; Magnesium')]
To directly compare the last two snapshots taken, we can call
compare_last_two_snapshots.
First we edit the sample to get a new snapshot:
>>> sample.edit(CCEmails="rb@ridingbytes.com")
>>> snapshot = take_snapshot(sample)
>>> last_diff = compare_last_two_snapshots(sample, raw=False)
>>> last_diff["CCEmails"]
[('Not set', 'rb@ridingbytes.com')]
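Conceptually, comparing two snapshots reduces to a field-by-field dict diff. A minimal sketch (the compare_snapshots helper below is a hypothetical re-implementation, not SENAITE's):

```python
def compare_snapshots(snap1, snap2):
    """Return {field: (old, new)} for every field whose value changed."""
    keys = set(snap1) | set(snap2)
    return {key: (snap1.get(key), snap2.get(key))
            for key in keys if snap1.get(key) != snap2.get(key)}

old = {"Analyses": ["Au", "Cu", "Fe"], "CCEmails": ""}
new = {"Analyses": ["Au", "Ca", "Cu", "Fe", "Mg"], "CCEmails": ""}
diff = compare_snapshots(old, new)
print(diff)  # {'Analyses': (['Au', 'Cu', 'Fe'], ['Au', 'Ca', 'Cu', 'Fe', 'Mg'])}
```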
Pause and Resume Snapshots
Register event subscribers:
>>> register_event_subscribers()
Pausing the snapshots will disable snapshots for a given object:
>>> pause_snapshots_for(sample)
The object no longer supports snapshots:
>>> supports_snapshots(sample)
False
Object modification events then no longer create snapshots:
>>> get_version(sample)
3
>>> get_version(sample)
3
Resuming the snapshots will enable snapshots for a given object:
>>> resume_snapshots_for(sample)
The object supports snapshots again:
>>> supports_snapshots(sample)
True
Object modification events create new snapshots again:
>>> get_version(sample)
4
Unregister event subscribers:
>>> unregister_event_subscribers()
API User
The user API provides a simple interface to control users and groups in SENAITE.
Running this test from the buildout directory:
bin/test test_textual_doctests -t API_user
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from bika.lims.api.user import *
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Environment Setup
Setup the testing environment:
>>> setRoles(portal, TEST_USER_ID, ['LabManager', ])
>>> user = api.get_current_user()
Get user
Get the user object (not the memberdata-wrapped user):
>>> current_user = get_user()
>>> current_user
<PloneUser 'test-user'>
This function also takes an optional user argument:
>>> other_user = get_user("admin")
>>> other_user
<PropertiedUser 'admin'>
It can also take the user object:
>>> other_user = get_user(other_user)
>>> other_user
<PropertiedUser 'admin'>
Or a MemberData object:
>>> member = api.get_user(TEST_USER_ID)
>>> member
<Products.PlonePAS.tools.memberdata.MemberData object at ...>
>>> get_user(member)
<PloneUser 'test-user'>
It returns None if the user was not found:
>>> get_user("nonexistant") is None
True
Get user ID
The user ID can be retrieved with the same kinds of arguments as the get_user
function:
>>> current_user_id = get_user_id()
>>> current_user_id
'test_user_1_'
It also takes the optional user argument:
>>> get_user_id(TEST_USER_ID)
'test_user_1_'
It can also take the user object:
>>> current_user = get_user()
>>> get_user_id(current_user)
'test_user_1_'
If the user was not found, it returns None:
>>> get_user_id("nonexistant") is None
True
Get user groups
This function returns the groups the user belongs to:
>>> get_groups()
['AuthenticatedUsers']
It also takes the optional user argument:
>>> get_groups('admin')
['AuthenticatedUsers']
Get group
This function returns a group object:
>>> get_group('Analysts')
<GroupData at /plone/portal_groupdata/Analysts used for /plone/acl_users/source_groups>
It returns None if the group was not found:
>>> get_group('noexistant') is None
True
If None is passed, None is returned:
>>> get_group(None) is None
True
Add group
This function adds users to group(s):
>>> add_group("Analysts")
['AuthenticatedUsers', 'Analysts']
It also takes an optional user parameter to add another user to a group:
>>> add_group("LabManagers", "admin")
['AuthenticatedUsers', 'LabManagers']
Adding a user to multiple groups is also allowed:
>>> add_group(["Analyst", "Samplers", "Publishers"], "admin")
['Publishers', 'Samplers', 'LabManagers', 'AuthenticatedUsers']
Delete group
This function removes users from group(s):
>>> get_groups()
['AuthenticatedUsers', 'Analysts']
>>> del_group("Analysts")
['AuthenticatedUsers']
Removing a user from multiple groups is also allowed:
>>> get_groups("admin")
['Publishers', 'Samplers', 'LabManagers', 'AuthenticatedUsers']
>>> del_group(["Publishers", "Samplers", "LabManagers"], "admin")
['AuthenticatedUsers']
AR Analyses Field
This field manages Analyses for Analysis Requests.
It is capable of performing the following tasks:
- Create Analyses from Analysis Services
- Delete assigned Analyses
- Update Prices of assigned Analyses
- Update Specifications of assigned Analyses
- Update Interim Fields of assigned Analyses
Running this test from the buildout directory:
bin/test test_textual_doctests -t ARAnalysesField
Test Setup
Imports:
>>> import transaction
>>> from operator import methodcaller
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def get_analyses_from(sample, services):
... if not isinstance(services, (list, tuple)):
... services = [services]
... uids = map(api.get_uid, services)
... analyses = sample.getAnalyses(full_objects=True)
... return filter(lambda an: an.getServiceUID() in uids, analyses)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> calculations = setup.bika_calculations
>>> sampletypes = setup.bika_sampletypes
>>> samplepoints = setup.bika_samplepoints
>>> analysiscategories = setup.bika_analysiscategories
>>> analysisspecs = setup.bika_analysisspecs
>>> analysisservices = setup.bika_analysisservices
>>> labcontacts = setup.bika_labcontacts
>>> worksheets = setup.worksheets
>>> storagelocations = setup.bika_storagelocations
>>> samplingdeviations = setup.bika_samplingdeviations
>>> sampleconditions = setup.bika_sampleconditions
>>> portal_url = portal.absolute_url()
>>> setup_url = portal_url + "/bika_setup"
Test User:
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Prepare Test Environment
Create Client:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="Happy Hills", ClientID="HH")
>>> client
<Client at /plone/clients/client-1>
Create some Contact(s):
>>> contact1 = api.create(client, "Contact", Firstname="Client", Surname="One")
>>> contact1
<Contact at /plone/clients/client-1/contact-1>
>>> contact2 = api.create(client, "Contact", Firstname="Client", Surname="Two")
>>> contact2
<Contact at /plone/clients/client-1/contact-2>
Create a Sample Type:
>>> sampletype = api.create(sampletypes, "SampleType", Prefix="water", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
Create a Sample Point:
>>> samplepoint = api.create(samplepoints, "SamplePoint", title="Lake Python")
>>> samplepoint
<SamplePoint at /plone/bika_setup/bika_samplepoints/samplepoint-1>
Create an Analysis Category:
>>> analysiscategory = api.create(analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
Create Analysis Service for PH (Keyword: PH):
>>> analysisservice1 = api.create(analysisservices, "AnalysisService", title="PH", ShortTitle="ph", Category=analysiscategory, Keyword="PH", Price="10")
>>> analysisservice1
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
Create Analysis Service for Magnesium (Keyword: MG):
>>> analysisservice2 = api.create(analysisservices, "AnalysisService", title="Magnesium", ShortTitle="mg", Category=analysiscategory, Keyword="MG", Price="20")
>>> analysisservice2
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-2>
Create Analysis Service for Calcium (Keyword: CA):
>>> analysisservice3 = api.create(analysisservices, "AnalysisService", title="Calcium", ShortTitle="ca", Category=analysiscategory, Keyword="CA", Price="30")
>>> analysisservice3
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-3>
Create Analysis Service for Total Hardness (Keyword: THCaCO3):
>>> analysisservice4 = api.create(analysisservices, "AnalysisService", title="Total Hardness", ShortTitle="Tot. Hard", Category=analysiscategory, Keyword="THCaCO3", Price="40")
>>> analysisservice4
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-4>
Create Analysis Service w/o calculation (Keyword: NOCALC):
>>> analysisservice5 = api.create(analysisservices, "AnalysisService", title="No Calculation", ShortTitle="nocalc", Category=analysiscategory, Keyword="NoCalc", Price="50")
>>> analysisservice5
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-5>
Create some Calculations with Formulas referencing existing AS keywords:
>>> calc1 = api.create(calculations, "Calculation", title="Round")
>>> calc1.setFormula("round(12345, 2)")
>>> calc2 = api.create(calculations, "Calculation", title="A in ppt")
>>> calc2.setFormula("[A] * 1000")
>>> calc3 = api.create(calculations, "Calculation", title="B in ppt")
>>> calc3.setFormula("[B] * 1000")
>>> calc4 = api.create(calculations, "Calculation", title="Total Hardness")
>>> calc4.setFormula("[CA] + [MG]")
Assign the calculations to the Analysis Services:
>>> analysisservice1.setCalculation(calc1)
>>> analysisservice2.setCalculation(calc2)
>>> analysisservice3.setCalculation(calc3)
>>> analysisservice4.setCalculation(calc4)
Create an Analysis Specification for Water:
>>> sampletype_uid = api.get_uid(sampletype)
>>> rr1 = {"keyword": "PH", "min": 5, "max": 7, "error": 10, "hidemin": "", "hidemax": "", "rangecomment": "Lab PH Spec"}
>>> rr2 = {"keyword": "MG", "min": 5, "max": 7, "error": 10, "hidemin": "", "hidemax": "", "rangecomment": "Lab MG Spec"}
>>> rr3 = {"keyword": "CA", "min": 5, "max": 7, "error": 10, "hidemin": "", "hidemax": "", "rangecomment": "Lab CA Spec"}
>>> rr = [rr1, rr2, rr3]
>>> analysisspec1 = api.create(analysisspecs, "AnalysisSpec", title="Lab Water Spec", SampleType=sampletype_uid, ResultsRange=rr)
Create an Analysis Request:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact1.UID(),
... 'CContact': contact2.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID(),
... 'Priority': '1',
... }
>>> service_uids = [analysisservice1.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/water-0001>
ARAnalysesField
This field maintains Analyses within Analysis Requests:
>>> field = ar.getField("Analyses")
>>> field.type
'analyses'
>>> from bika.lims.interfaces import IARAnalysesField
>>> IARAnalysesField.providedBy(field)
True
Getting Analyses
The get method returns a list of assigned analysis brains:
>>> field.get(ar)
[<Products.ZCatalog.Catalog.mybrains object at ...>]
The full objects can be obtained by passing in full_objects=True:
>>> field.get(ar, full_objects=True)
[<Analysis at /plone/clients/client-1/water-0001/PH>]
The analysis PH is now contained in the AR:
>>> ar.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/water-0001/PH>]
Setting Analyses
The set method returns a list of newly created analyses.
The field takes the following parameters:
- items: a list of the items to be set. The list can contain Analysis
  objects/brains, AnalysisService objects/brains and/or Analysis Service UIDs.
- prices: a dictionary mapping AnalysisService UID -> price.
- specs: a list of ResultsRange dictionaries (as defined for the ResultsRange
  field), each matched to an Analysis by its keyword.
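For illustration only, the three parameters could be assembled as below. All UIDs and values are made up; this is not output of the real field:

```python
# Hypothetical payloads for ARAnalysesField.set() -- the UIDs are invented.
items = ["uid-ph-service", "uid-ca-service"]

# prices: AnalysisService UID -> price
prices = {"uid-ph-service": "100", "uid-ca-service": "200"}

# specs: one ResultsRange dictionary per service, matched by keyword
specs = [
    {"keyword": "PH", "min": 5, "max": 7, "error": 10},
    {"keyword": "CA", "min": 5, "max": 7, "error": 10},
]

print(prices["uid-ph-service"])
```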
Pass in all prior created Analysis Services:
>>> all_services = [analysisservice1, analysisservice2, analysisservice3]
>>> field.set(ar, all_services)
We now expect to have the CA and MG Analyses as well:
>>> sorted(ar.objectValues("Analysis"), key=methodcaller('getId'))
[<Analysis at /plone/clients/client-1/water-0001/CA>, <Analysis at /plone/clients/client-1/water-0001/MG>, <Analysis at /plone/clients/client-1/water-0001/PH>]
Removing Analyses is done by omitting them from the items list:
>>> field.set(ar, [analysisservice1])
Now there should be only one Analysis assigned again:
>>> len(ar.objectValues("Analysis"))
1
We expect to have just the PH Analysis again:
>>> ar.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/water-0001/PH>]
The field can also handle UIDs of Analysis Services:
>>> service_uids = map(api.get_uid, all_services)
>>> field.set(ar, service_uids)
We again expect to have all three Analyses:
>>> sorted(ar.objectValues("Analysis"), key=methodcaller("getId"))
[<Analysis at /plone/clients/client-1/water-0001/CA>, <Analysis at /plone/clients/client-1/water-0001/MG>, <Analysis at /plone/clients/client-1/water-0001/PH>]
The field should also handle catalog brains:
>>> brains = api.search({"portal_type": "AnalysisService", "getKeyword": "CA"})
>>> brains
[<Products.ZCatalog.Catalog.mybrains object at 0x...>]
>>> brain = brains[0]
>>> api.get_title(brain)
'Calcium'
>>> field.set(ar, [brain])
We now expect to have just the CA analysis assigned:
>>> ar.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/water-0001/CA>]
Now let’s try a mix of one catalog brain and one object:
>>> field.set(ar, [analysisservice1, brain])
We now expect to have PH and CA:
>>> sorted(ar.objectValues("Analysis"), key=methodcaller("getId"))
[<Analysis at /plone/clients/client-1/water-0001/CA>, <Analysis at /plone/clients/client-1/water-0001/PH>]
Finally, we test it with an Analysis object:
>>> analysis1 = ar["PH"]
>>> field.set(ar, [analysis1])
>>> sorted(ar.objectValues("Analysis"), key=methodcaller("getId"))
[<Analysis at /plone/clients/client-1/water-0001/PH>]
Setting Analysis Specifications
Specifications are defined on the ResultsRange field of an Analysis Request.
It is a dictionary with the following keys and values:
- keyword: The Keyword of the Analysis Service
- min: The minimum allowed value
- max: The maximum allowed value
- error: The error percentage
- hidemin: ?
- hidemax: ?
- rangecomment: ?
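A minimal sketch of how such a results range could be evaluated. The helper below is hypothetical and not part of the SENAITE API; it only illustrates the min/max semantics of the dictionary:

```python
def is_out_of_range(result, results_range):
    """Return True if the result falls outside the min/max window of a
    ResultsRange dictionary (hypothetical helper for illustration)."""
    value = float(result)
    minimum = float(results_range["min"])
    maximum = float(results_range["max"])
    return value < minimum or value > maximum

spec = {"keyword": "PH", "min": 5, "max": 7, "error": 10,
        "hidemin": "", "hidemax": "", "rangecomment": "Lab PH Spec"}
print(is_out_of_range(6.5, spec))  # within range
print(is_out_of_range(8.0, spec))  # out of range
```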
Each Analysis can request its own Specification (Result Range):
>>> field.set(ar, all_services)
>>> analysis1 = ar[analysisservice1.getKeyword()]
>>> analysis2 = ar[analysisservice2.getKeyword()]
>>> analysis3 = ar[analysisservice3.getKeyword()]
Now we will set the analyses with custom specifications through the
ARAnalysesField. This should set the custom Specifications on the Analysis
Request and have precedence over the lab specifications:
>>> spec_min = 5.5
>>> spec_max = 7.5
>>> error = 5
>>> arr1 = {"keyword": "PH", "min": 5.5, "max": 7.5, "error": 5, "hidemin": "", "hidemax": "", "rangecomment": "My PH Spec"}
>>> arr2 = {"keyword": "MG", "min": 5.5, "max": 7.5, "error": 5, "hidemin": "", "hidemax": "", "rangecomment": "My MG Spec"}
>>> arr3 = {"keyword": "CA", "min": 5.5, "max": 7.5, "error": 5, "hidemin": "", "hidemax": "", "rangecomment": "My CA Spec"}
>>> arr = [arr1, arr2, arr3]
>>> all_analyses = [analysis1, analysis2, analysis3]
>>> field.set(ar, all_analyses, specs=arr)
>>> myspec1 = analysis1.getResultsRange()
>>> myspec1.get("rangecomment")
'My PH Spec'
>>> myspec2 = analysis2.getResultsRange()
>>> myspec2.get("rangecomment")
'My MG Spec'
>>> myspec3 = analysis3.getResultsRange()
>>> myspec3.get("rangecomment")
'My CA Spec'
Result Ranges are set at the analysis level, but are not present in the AR:
>>> sorted(map(lambda r: r.get("rangecomment"), ar.getResultsRange()))
[]
Now we simulate the form input of the AR’s “Manage Analysis” form, where the
User selected only the PH service and provided some custom specifications for
this Analysis.
The specifications get applied if the keyword matches:
>>> ph_specs = {"keyword": analysis1.getKeyword(), "min": 5.2, "max": 7.9, "error": 3}
>>> field.set(ar, [analysis1], specs=[ph_specs])
We now expect to have just one Analysis set:
>>> analyses = field.get(ar, full_objects=True)
>>> analyses
[<Analysis at /plone/clients/client-1/water-0001/PH>]
And the specification should match the values we have set:
>>> ph = analyses[0]
>>> phspec = ph.getResultsRange()
>>> phspec.get("min")
5.2
>>> phspec.get("max")
7.9
>>> phspec.get("error")
3
Setting Analyses Prices
Prices are primarily defined on Analyses Services:
>>> analysisservice1.getPrice()
'10.00'
>>> analysisservice2.getPrice()
'20.00'
>>> analysisservice3.getPrice()
'30.00'
Created Analyses inherit that price:
>>> field.set(ar, all_services)
>>> analysis1 = ar[analysisservice1.getKeyword()]
>>> analysis2 = ar[analysisservice2.getKeyword()]
>>> analysis3 = ar[analysisservice3.getKeyword()]
>>> analysis1.getPrice()
'10.00'
>>> analysis2.getPrice()
'20.00'
>>> analysis3.getPrice()
'30.00'
The setter also allows setting custom prices for the Analyses:
>>> prices = {
... analysisservice1.UID(): "100",
... analysisservice2.UID(): "200",
... analysisservice3.UID(): "300",
... }
Now we set the field with all analysis services and new prices:
>>> field.set(ar, all_services, prices=prices)
The Analyses now have the new prices:
>>> analysis1.getPrice()
'100.00'
>>> analysis2.getPrice()
'200.00'
>>> analysis3.getPrice()
'300.00'
The Services should retain the old prices:
>>> analysisservice1.getPrice()
'10.00'
>>> analysisservice2.getPrice()
'20.00'
>>> analysisservice3.getPrice()
'30.00'
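The price handling above can be sketched as a lookup with a fallback to the service's default price. This helper is hypothetical; the real setter resolves prices internally:

```python
def resolve_price(service_uid, default_price, prices=None):
    """Return the custom price passed for the service, falling back to
    the service's own price (hypothetical helper for illustration)."""
    return (prices or {}).get(service_uid, default_price)

# A custom price was passed for one service only (invented UIDs):
prices = {"uid-ph": "100"}
print(resolve_price("uid-ph", "10.00", prices))  # custom price wins
print(resolve_price("uid-mg", "20.00", prices))  # falls back to default
```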
Calculations and Interim Fields
When an Analysis is assigned to a Sample, it inherits its Calculation and Interim Fields.
Create some interim fields:
>>> interim1 = {"keyword": "A", "title": "Interim A", "value": 1, "hidden": False, "type": "int", "unit": "x"}
>>> interim2 = {"keyword": "B", "title": "Interim B", "value": 2, "hidden": False, "type": "int", "unit": "x"}
>>> interim3 = {"keyword": "C", "title": "Interim C", "value": 3, "hidden": False, "type": "int", "unit": "x"}
>>> interim4 = {"keyword": "D", "title": "Interim D", "value": 4, "hidden": False, "type": "int", "unit": "x"}
Append interim field A to the Total Hardness Calculation:
>>> calc4.setInterimFields([interim1])
>>> map(lambda x: x["keyword"], calc4.getInterimFields())
['A']
Append interim field B to the Total Hardness Analysis Service:
>>> analysisservice4.setInterimFields([interim2])
>>> map(lambda x: x["keyword"], analysisservice4.getInterimFields())
['B']
Now we assign the Total Hardness Analysis Service:
>>> field.set(ar, [analysisservice4])
>>> new_analyses = get_analyses_from(ar, analysisservice4)
>>> analysis = new_analyses[0]
>>> analysis
<Analysis at /plone/clients/client-1/water-0001/THCaCO3>
The created Analysis has the same Calculation attached as the Analysis Service:
>>> analysis_calc = analysis.getCalculation()
>>> analysis_calc
<Calculation at /plone/bika_setup/bika_calculations/calculation-4>
And therefore, also the same Interim Fields as the Calculation:
>>> map(lambda x: x["keyword"], analysis_calc.getInterimFields())
['A']
The Analysis also inherits the Interim Fields of the Analysis Service:
>>> map(lambda x: x["keyword"], analysis.getInterimFields())
['B', 'A']
But what happens if the Interim Fields of either the Analysis Service or of the
Calculation change and the AR is updated with the same Analysis Service?
Change the Interim Field of the Calculation to C:
>>> calc4.setInterimFields([interim3])
>>> map(lambda x: x["keyword"], calc4.getInterimFields())
['C']
Change the Interim Fields of the Analysis Service to D:
>>> analysisservice4.setInterimFields([interim4])
The Analysis Service returns only local interim fields:
>>> map(lambda x: x["keyword"], analysisservice4.getInterimFields())
['D']
Update the AR with the new Analysis Service:
>>> field.set(ar, [analysisservice4])
The Analysis should be still there:
>>> analysis = ar[analysisservice4.getKeyword()]
>>> analysis
<Analysis at /plone/clients/client-1/water-0001/THCaCO3>
The calculation should be still there:
>>> analysis_calc = analysis.getCalculation()
>>> analysis_calc
<Calculation at /plone/bika_setup/bika_calculations/calculation-4>
And therefore, also the same Interim Fields as the Calculation:
>>> map(lambda x: x["keyword"], analysis_calc.getInterimFields())
['C']
The existing Analysis retains the initial Interim Fields of the Analysis
Service, together with the interim fields from the associated Calculation:
>>> map(lambda x: x["keyword"], analysis.getInterimFields())
['B', 'A']
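The merging behaviour observed above (service interims first, then the calculation's interims) can be sketched as follows. This is a toy model of the observed behaviour, not the real SENAITE code:

```python
def merge_interims(service_interims, calculation_interims):
    """Combine the service's own interim fields with those of the attached
    Calculation: service interims come first, and duplicate keywords from
    the Calculation are skipped (sketch of the observed behaviour)."""
    merged = list(service_interims)
    seen = {interim["keyword"] for interim in merged}
    merged += [interim for interim in calculation_interims
               if interim["keyword"] not in seen]
    return merged

service = [{"keyword": "B", "title": "Interim B"}]
calc = [{"keyword": "A", "title": "Interim A"}]
print([interim["keyword"] for interim in merge_interims(service, calc)])
```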
Worksheets
If an Analysis is assigned to a worksheet, it should be detached before it
is removed from an Analysis Request.
Assign the PH Analysis:
>>> field.set(ar, [analysisservice1])
>>> new_analyses = ar.getAnalyses(full_objects=True)
Create a new Worksheet and assign the Analysis to it:
>>> ws = api.create(worksheets, "Worksheet", "WS")
>>> analysis = new_analyses[0]
>>> ws.addAnalysis(analysis)
The analysis is not associated with the Worksheet because the AR is not received:
>>> analysis.getWorksheet() is None
True
>>> ws.getAnalyses()
[]
>>> success = do_action_for(ar, "receive")
>>> api.get_workflow_status_of(ar)
'sample_received'
Try to assign the Analysis to the Worksheet again:
>>> ws.addAnalysis(analysis)
The analysis is now associated with the Worksheet:
>>> analysis.getWorksheet().UID() == ws.UID()
True
The Worksheet now contains the Analysis:
>>> ws.getAnalyses()
[<Analysis at /plone/clients/client-1/water-0001/PH>]
Removing the analysis from the AR also unassigns it from the worksheet:
>>> field.set(ar, [analysisservice2])
Dependencies
The Analysis Service Total Hardness uses the Total Hardness Calculation:
>>> analysisservice4.getCalculation()
<Calculation at /plone/bika_setup/bika_calculations/calculation-4>
The Calculation is dependent on the CA and MG Services through its Formula:
>>> analysisservice4.getCalculation().getFormula()
'[CA] + [MG]'
Get the dependent services:
>>> sorted(analysisservice4.getServiceDependencies(), key=methodcaller('getId'))
[<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-2>, <AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-3>]
We expect that dependent services get automatically set:
>>> field.set(ar, [analysisservice4])
>>> sorted(ar.objectValues("Analysis"), key=methodcaller('getId'))
[<Analysis at /plone/clients/client-1/water-0001/CA>, <Analysis at /plone/clients/client-1/water-0001/MG>, <Analysis at /plone/clients/client-1/water-0001/THCaCO3>]
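The dependencies come from the keywords referenced in the calculation formula. A simplified sketch of how such keywords could be extracted (the real Calculation object resolves its dependencies itself; this helper is hypothetical):

```python
import re

def get_dependency_keywords(formula):
    """Extract the service keywords referenced by a calculation formula
    such as '[CA] + [MG]' (simplified sketch for illustration)."""
    return re.findall(r"\[([^\]]+)\]", formula)

print(get_dependency_keywords("[CA] + [MG]"))
```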
Attachments
Attachments can be assigned to the Analysis Request or to individual Analyses.
If an attachment was assigned to a specific analysis, it must be deleted if the
Analysis was removed, see https://github.com/senaite/senaite.core/issues/1025.
However, for invalidated/retested ARs the attachments are linked to both the
original AR/Analyses and the retested AR/Analyses. Therefore, an attachment
must be retained as long as it is still referenced.
Create a new AR and assign the PH analysis:
>>> service_uids = [analysisservice1.UID()]
>>> ar2 = create_analysisrequest(client, request, values, service_uids)
>>> ar2
<AnalysisRequest at /plone/clients/client-1/water-0002>
Get the analysis:
>>> an1 = ar2[analysisservice1.getKeyword()]
>>> an1
<Analysis at /plone/clients/client-1/water-0002/PH>
It should have no attachments assigned:
>>> an1.getAttachment()
[]
We create a new attachment in the client and assign it to this specific analysis:
>>> att1 = api.create(ar2.getClient(), "Attachment", title="PH.png")
>>> an1.setAttachment(att1)
>>> an1.getAttachment()
[<Attachment at /plone/clients/client-1/attachment-1>]
Now we remove the PH analysis. Since the field prohibits removing all
analyses from an AR, we set some other analyses here instead:
>>> field.set(ar2, [analysisservice2, analysisservice3])
The attachment should be deleted from the client folder as well:
>>> att1.getId() in ar2.getClient().objectIds()
False
Re-adding the PH analysis should start with no attachments:
>>> field.set(ar2, [analysisservice1, analysisservice2, analysisservice3])
>>> an1 = ar2[analysisservice1.getKeyword()]
>>> an1.getAttachment()
[]
This should work as well when multiple attachments are assigned.
>>> field.set(ar2, [analysisservice1, analysisservice2])
>>> an1 = ar2[analysisservice1.getKeyword()]
>>> an2 = ar2[analysisservice2.getKeyword()]
>>> att2 = api.create(ar2.getClient(), "Attachment", title="test2.png")
>>> att3 = api.create(ar2.getClient(), "Attachment", title="test3.png")
>>> att4 = api.create(ar2.getClient(), "Attachment", title="test4.png")
>>> att5 = api.create(ar2.getClient(), "Attachment", title="test5.png")
>>> att6 = api.create(ar2.getClient(), "Attachment", title="test6.png")
>>> att7 = api.create(ar2.getClient(), "Attachment", title="test7.png")
Assign the first half of the attachments to the PH analysis:
>>> an1.setAttachment([att2, att3, att4])
>>> an1.getAttachment()
[<Attachment at /plone/clients/client-1/attachment-2>, <Attachment at /plone/clients/client-1/attachment-3>, <Attachment at /plone/clients/client-1/attachment-4>]
Assign the second half of the attachments to the Magnesium analysis:
>>> an2.setAttachment([att5, att6, att7])
>>> an2.getAttachment()
[<Attachment at /plone/clients/client-1/attachment-5>, <Attachment at /plone/clients/client-1/attachment-6>, <Attachment at /plone/clients/client-1/attachment-7>]
Removing the PH analysis should also remove all the assigned attachments:
>>> field.set(ar2, [analysisservice2])
>>> att2.getId() in ar2.getClient().objectIds()
False
>>> att3.getId() in ar2.getClient().objectIds()
False
>>> att4.getId() in ar2.getClient().objectIds()
False
The attachments of Magnesium should still be there:
>>> att5.getId() in ar2.getClient().objectIds()
True
>>> att6.getId() in ar2.getClient().objectIds()
True
>>> att7.getId() in ar2.getClient().objectIds()
True
Attachments linked to multiple ARs/ANs
When an AR is invalidated, a copy of it gets created for retesting. This copy
also holds the Attachments as references.
Create a new AR for that and assign a service without a calculation:
>>> service_uids = [analysisservice5.UID()]
>>> ar3 = create_analysisrequest(client, request, values, service_uids)
>>> ar3
<AnalysisRequest at /plone/clients/client-1/water-0003>
Receive the AR:
>>> transitioned = do_action_for(ar3, "receive")
>>> transitioned[0]
True
>>> ar3.portal_workflow.getInfoFor(ar3, "review_state")
'sample_received'
Assign an attachment to the AR:
>>> att_ar = api.create(ar3.getClient(), "Attachment", title="ar.png")
>>> ar3.setAttachment(att_ar)
>>> ar3.getAttachment()
[<Attachment at /plone/clients/client-1/attachment-8>]
Assign an attachment to the Analysis:
>>> an = ar3[analysisservice5.getKeyword()]
>>> att_an = api.create(ar3.getClient(), "Attachment", title="an.png")
>>> an.setAttachment(att_an)
>>> an.getAttachment()
[<Attachment at /plone/clients/client-1/attachment-9>]
Set the results of the Analysis and submit and verify them directly.
Therefore, self-verification must be allowed in the setup:
>>> setup.setSelfVerificationEnabled(True)
>>> for analysis in ar3.getAnalyses(full_objects=True):
... analysis.setResult("12")
... transitioned = do_action_for(analysis, "submit")
... transitioned = do_action_for(analysis, "verify")
Finally we can publish the AR:
>>> transitioned = do_action_for(ar3, "publish")
And invalidate it directly:
>>> transitioned = do_action_for(ar3, "invalidate")
A new AR is automatically created for retesting:
>>> ar_retest = ar3.getRetest()
>>> ar_retest
<AnalysisRequest at /plone/clients/client-1/water-0003-R01>
>>> an_retest = ar3.getRetest()[analysisservice5.getKeyword()]
>>> an_retest
<Analysis at /plone/clients/client-1/water-0003-R01/NoCalc>
However, this retest AR references the same Attachments as the original AR:
>>> ar_retest.getAttachment() == ar3.getAttachment()
True
>>> att_ar.getLinkedRequests()
[<AnalysisRequest at /plone/clients/client-1/water-0003-R01>, <AnalysisRequest at /plone/clients/client-1/water-0003>]
>>> att_ar.getLinkedAnalyses()
[]
And all contained Analyses of the retest keep references to the same Attachments:
>>> an_retest.getAttachment() == an.getAttachment()
True
>>> att_an.getLinkedRequests()
[]
>>> att_an.getLinkedAnalyses()
[<Analysis at /plone/clients/client-1/water-0003/NoCalc>, <Analysis at /plone/clients/client-1/water-0003-R01/NoCalc>]
This means that removing that attachment from the retest should not delete
the attachment from the original AR:
>>> field.set(ar_retest, [analysisservice1])
>>> an.getAttachment()
[<Attachment at /plone/clients/client-1/attachment-9>]
>>> att_an.getId() in ar3.getClient().objectIds()
True
And the attachment is now only linked to the original analysis:
>>> att_an.getLinkedAnalyses()
[<Analysis at /plone/clients/client-1/water-0003/NoCalc>]
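The cleanup rule demonstrated above (delete an attachment only once nothing references it any more) can be modelled roughly as below. The data shapes and names are invented for illustration; this is not the actual deletion code:

```python
def purge_attachments(analysis_uid, attachments):
    """Return the ids of attachments that may safely be deleted when the
    given analysis is removed: an attachment is retained as long as it is
    still linked to another analysis, e.g. from a retest. `attachments`
    is a list of {"id": ..., "linked_to": set of analysis UIDs}."""
    deletable = []
    for attachment in attachments:
        remaining = attachment["linked_to"] - {analysis_uid}
        if not remaining:
            deletable.append(attachment["id"])
    return deletable

atts = [
    # still linked to the original analysis after the retest is removed
    {"id": "attachment-9", "linked_to": {"an-original", "an-retest"}},
    # only linked to the retest analysis -> orphaned, may be deleted
    {"id": "attachment-2", "linked_to": {"an-retest"}},
]
print(purge_attachments("an-retest", atts))
```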
AR Analyses Field when using Partitions
The setter of the ARAnalysesField takes descendants (partitions) and ancestors
of the current instance into account to prevent inconsistencies: in a Sample
lineage, analyses from a node are always masked by the same analyses in leaves.
This can lead to inconsistencies, so the tree must be kept free of duplicates.
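The masking rule can be pictured with a toy tree model, where for a given service keyword the analysis in a leaf always wins over the same analysis in an ancestor. This is only an illustrative sketch, not the real lineage API:

```python
def serving_container(tree, keyword, path="sample"):
    """Return the path of the container whose analysis "wins" for the
    given service keyword: analyses in leaves mask the same analyses in
    their ancestors. `tree` is {"analyses": set, "children": [trees]}."""
    for index, child in enumerate(tree.get("children", [])):
        found = serving_container(
            child, keyword, "%s/part-%d" % (path, index + 1))
        if found:
            return found
    if keyword in tree["analyses"]:
        return path
    return None

# A primary sample holding Fe and Cu, with one partition also holding Cu:
tree = {"analyses": {"Fe", "Cu"},
        "children": [{"analyses": {"Cu"}, "children": []}]}
print(serving_container(tree, "Cu"))  # the partition masks the primary
print(serving_container(tree, "Fe"))  # only the primary holds Fe
```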
Running this test from the buildout directory:
bin/test test_textual_doctests -t ARAnalysesFieldWithPartitions
Test Setup
Needed imports:
>>> import transaction
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils.analysisrequest import create_partition
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from zope.interface import alsoProvides
>>> from zope.interface import noLongerProvides
Functional Helpers:
>>> def new_sample(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': DateTime().strftime("%Y-%m-%d"),
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def get_analysis_from(sample, service):
... service_uid = api.get_uid(service)
... for analysis in sample.getAnalyses(full_objects=True):
... if analysis.getServiceUID() == service_uid:
... return analysis
... return None
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = api.get_setup()
Create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> Mg = api.create(setup.bika_analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg", Price="20", Category=category.UID())
>>> Ca = api.create(setup.bika_analysisservices, "AnalysisService", title="Calcium", Keyword="Ca", Price="20", Category=category.UID())
>>> THCaCO3 = api.create(setup.bika_analysisservices, "AnalysisService", title="Total Hardness", Keyword="THCaCO3", Price="20", Category=category.UID())
>>> calc = api.create(setup.bika_calculations, "Calculation", title="Total Hardness")
>>> calc.setFormula("[Ca] + [Mg]")
>>> THCaCO3.setCalculation(calc)
Creation of a Sample with a Partition
Create a Sample and receive:
>>> sample = new_sample([Cu, Fe])
Create a Partition of the Sample, containing the analysis Cu:
>>> cu = get_analysis_from(sample, Cu)
>>> partition = create_partition(sample, request, [cu])
The analysis ‘Cu’ lives in the partition:
>>> cu = get_analysis_from(partition, Cu)
>>> api.get_parent(cu) == partition
True
Although it is also returned by the primary:
>>> cu = get_analysis_from(sample, Cu)
>>> api.get_parent(cu) == partition
True
>>> api.get_parent(cu) == sample
False
Analyses retrieval
Get the ARAnalysesField to play with:
>>> field = sample.getField("Analyses")
get_from_instance
When the primary is asked for Fe, it returns the analysis, because it lives
in the primary:
>>> fe = field.get_from_instance(sample, Fe)[0]
>>> fe.getServiceUID() == api.get_uid(Fe)
True
But when the primary is asked for Cu, it returns empty, because it lives in
the partition:
>>> field.get_from_instance(sample, Cu)
[]
While it returns the analysis when the partition is used:
>>> cu = field.get_from_instance(partition, Cu)[0]
>>> cu.getServiceUID() == api.get_uid(Cu)
True
But when the partition is asked for Fe, it returns empty, because it lives in
the ancestor:
>>> field.get_from_instance(partition, Fe)
[]
get_from_ancestor
When the primary is asked for Fe, it returns empty because there is no
ancestor containing Fe:
>>> field.get_from_ancestor(sample, Fe)
[]
But when the partition is asked for Fe, it returns the analysis, because it
lives in an ancestor of the partition:
>>> fe = field.get_from_ancestor(partition, Fe)[0]
>>> fe.getServiceUID() == api.get_uid(Fe)
True
If we ask for Cu, which lives in the partition, it returns empty for both:
>>> field.get_from_ancestor(sample, Cu)
[]
>>> field.get_from_ancestor(partition, Cu)
[]
get_from_descendant
When the primary is asked for Fe, it returns empty because there is no
descendant containing Fe:
>>> field.get_from_descendant(sample, Fe)
[]
And same with partition:
>>> field.get_from_descendant(partition, Fe)
[]
When the primary is asked for Cu, it returns the analysis, because it lives
in a descendant (the partition):
>>> field.get_from_descendant(sample, Cu)
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>]
But it returns empty when the partition is asked:
>>> field.get_from_descendant(partition, Cu)
[]
get_analyses_from_descendants
It returns the analyses contained by the descendants:
>>> field.get_analyses_from_descendants(sample)
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>]
>>> field.get_analyses_from_descendants(partition)
[]
Resolution of analyses from the Sample lineage
resolve_analyses
Resolves the analysis from the sample lineage if it exists:
>>> field.resolve_analyses(sample, Fe)
[<Analysis at /plone/clients/client-1/W-0001/Fe>]
>>> field.resolve_analyses(sample, Cu)
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>]
>>> field.resolve_analyses(sample, Au)
[]
But when we use the partition and the analysis is found in an ancestor, it
moves the analysis into the partition:
>>> field.resolve_analyses(partition, Fe)
[<Analysis at /plone/clients/client-1/W-0001-P01/Fe>]
>>> sample.objectValues("Analysis")
[]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>]
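The resolution order shown above can be sketched with a toy model: look in the instance first, and if the analysis lives in the ancestor instead, move it into the instance. The function and data shapes below are invented for illustration and do not reflect the real field implementation:

```python
def resolve_analysis(samples, instance, parent, keyword):
    """Toy model of resolve_analyses: `samples` maps container ids to
    sets of analysis keywords. If the analysis is found in the ancestor,
    it gets moved into the instance (as observed above)."""
    if keyword in samples[instance]:
        return instance
    if parent is not None and keyword in samples[parent]:
        samples[parent].discard(keyword)
        samples[instance].add(keyword)
        return instance
    return None

samples = {"W-0001": {"Fe"}, "W-0001-P01": {"Cu"}}
# Resolving Fe from the partition moves it out of the primary:
print(resolve_analysis(samples, "W-0001-P01", "W-0001", "Fe"))
print(sorted(samples["W-0001"]))
```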
Addition of analyses
add_analysis
If we now try to add an analysis that already exists, either in the partition
or in the primary, the analysis won’t be added:
>>> field.add_analysis(sample, Fe)
>>> sample.objectValues("Analysis")
[]
>>> field.add_analysis(partition, Fe)
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>]
If we add a new analysis, it will be added to the sample we are working with:
>>> field.add_analysis(sample, Au)
>>> sample.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001/Au>]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>]
If we try to add an analysis that exists in an ancestor, the analysis gets
moved while the function returns None:
>>> field.add_analysis(partition, Au)
>>> sample.objectValues("Analysis")
[]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>, <Analysis at /plone/clients/client-1/W-0001-P01/Au>]
Set analyses
If we try to set the same analyses as before on the root sample, nothing
happens because the analyses are already there:
>>> field.set(sample, [Cu, Fe, Au])
The analyses still belong to the partition though:
>>> sample.objectValues("Analysis")
[]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>, <Analysis at /plone/clients/client-1/W-0001-P01/Au>]
The result is the same if we set the analyses on the partition:
>>> field.set(partition, [Cu, Fe, Au])
>>> sample.objectValues("Analysis")
[]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>, <Analysis at /plone/clients/client-1/W-0001-P01/Au>]
If we add a new analysis to the list, the analysis is successfully added:
>>> field.set(sample, [Cu, Fe, Au, Mg])
>>> sample.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001/Mg>]
And the partition keeps its own analyses:
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>, <Analysis at /plone/clients/client-1/W-0001-P01/Au>]
If we set the same analyses on the partition, the Mg analysis is moved into
the partition:
>>> field.set(partition, [Cu, Fe, Au, Mg])
>>> sample.objectValues("Analysis")
[]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>, <Analysis at /plone/clients/client-1/W-0001-P01/Au>, <Analysis at /plone/clients/client-1/W-0001-P01/Mg>]
To remove the Mg analysis, pass the list without Mg:
>>> field.set(sample, [Cu, Fe, Au])
The analysis Mg has been removed, although it belonged to the partition:
>>> sample.objectValues("Analysis")
[]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>, <Analysis at /plone/clients/client-1/W-0001-P01/Au>]
But if we add a new analysis to the primary and try to remove it from the
partition, nothing happens:
>>> field.set(sample, [Cu, Fe, Au, Mg])
>>> field.set(partition, [Cu, Fe, Au])
>>> sample.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001/Mg>]
>>> partition.objectValues("Analysis")
[<Analysis at /plone/clients/client-1/W-0001-P01/Cu>, <Analysis at /plone/clients/client-1/W-0001-P01/Fe>, <Analysis at /plone/clients/client-1/W-0001-P01/Au>]
Test the calculation when a dependent service is assigned to a partition subsample.
Create a Sample and receive:
>>> sample2 = new_sample([Ca, Mg, THCaCO3])
Create a Partition of the Sample, containing the analysis Ca:
>>> ca = get_analysis_from(sample2, Ca)
>>> partition2 = create_partition(sample2, request, [ca])
Set result values for the analyses (Ca, Mg):
>>> analyses = sample2.getAnalyses(full_objects=True)
>>> ca_analysis = filter(lambda an: an.getKeyword()=="Ca", analyses)[0]
>>> mg_analysis = filter(lambda an: an.getKeyword()=="Mg", analyses)[0]
>>> ca_analysis.setResult(10)
>>> mg_analysis.setResult(10)
Calculate the dependent result and make sure it is correct:
>>> th_analysis = filter(lambda an: an.getKeyword()=="THCaCO3", analyses)[0]
>>> th_analysis.calculateResult()
True
>>> th_analysis.getResult()
'20.0'
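The result interpolation can be sketched as below: keywords in the formula are replaced by the corresponding results and the expression is evaluated. This is a simplified toy model of what the Calculation does, not the real implementation:

```python
import re

def calculate(formula, results):
    """Interpolate keyword results into a formula like '[Ca] + [Mg]'
    and evaluate it (simplified sketch; toy, trusted input only)."""
    expression = re.sub(r"\[([^\]]+)\]",
                        lambda match: str(float(results[match.group(1)])),
                        formula)
    return str(eval(expression))

print(calculate("[Ca] + [Mg]", {"Ca": 10, "Mg": 10}))
```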
Abbott’s m2000 Real Time import interface
Running this test from the buildout directory:
bin/test test_textual_doctests -t AbbottM2000rtImportInterface
Test Setup
Needed imports:
>>> import codecs
>>> import os
>>> import transaction
>>> from DateTime import DateTime
>>> from Products.CMFCore.utils import getToolByName
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from senaite.core.exportimport import instruments
>>> from senaite.core.exportimport.instruments.abbott.m2000rt.m2000rt import Abbottm2000rtTSVParser
>>> from senaite.core.exportimport.instruments.abbott.m2000rt.m2000rt import Abbottm2000rtImporter
>>> from senaite.core.exportimport.auto_import_results import UploadFileWrapper
Functional helpers:
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_instruments = bika_setup.bika_instruments
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
>>> bika_calculations = bika_setup.bika_calculations
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager:
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Availability of instrument interface
Check that the instrument interface is available:
>>> exims = []
>>> for exim_id in instruments.__all__:
... exims.append(exim_id)
>>> 'abbott.m2000rt.m2000rt' in exims
True
Assigning the Import Interface to an Instrument
Create an Instrument and assign to it the tested Import Interface:
>>> instrument = api.create(bika_instruments, "Instrument", title="Instrument-1")
>>> instrument
<Instrument at /plone/bika_setup/bika_instruments/instrument-1>
>>> instrument.setImportDataInterface(['abbott.m2000rt.m2000rt'])
>>> instrument.getImportDataInterface()
['abbott.m2000rt.m2000rt']
Import test
Required steps: Create and receive an Analysis Request for the import test.
An AnalysisRequest can only be created inside a Client, and it also requires a Contact and
a SampleType:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="NARALABS", ClientID="NLABS")
>>> client
<Client at /plone/clients/client-1>
>>> contact = api.create(client, "Contact", Firstname="Juan", Surname="Gallostra")
>>> contact
<Contact at /plone/clients/client-1/contact-1>
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="H2O", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
Create an AnalysisCategory (which categorizes different AnalysisServices), and add to it an AnalysisService.
This service matches the service specified in the file from which the import will be performed:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
>>> analysisservice = api.create(bika_analysisservices,
... "AnalysisService",
... title="HIV06ml",
... ShortTitle="hiv06",
... Category=analysiscategory,
... Keyword="HIV06ml")
>>> analysisservice
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
>>> total_calc = api.create(bika_calculations, 'Calculation', title='TotalCalc')
>>> total_calc.setFormula('[HIV06ml] * 100')
>>> analysisservice2 = api.create(bika_analysisservices,
... "AnalysisService",
... title="Test Total Results",
... ShortTitle="TestTotalResults",
... Category=analysiscategory,
... Keyword="TTR")
>>> analysisservice2.setUseDefaultCalculation(False)
>>> analysisservice2.setCalculation(total_calc)
>>> analysisservice2
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-2>
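The Calculation formula above references the result of another analysis by its keyword. Conceptually, evaluation substitutes each [Keyword] placeholder with the corresponding result; a simplified sketch of that idea (not SENAITE's actual calculation engine, which also handles interim fields, wildcards and error states):

```python
import re

def evaluate_formula(formula, results):
    """Replace [Keyword] placeholders with their results and evaluate
    the resulting expression. Simplified sketch only."""
    def substitute(match):
        return repr(results[match.group(1)])
    expression = re.sub(r"\[([^\]]+)\]", substitute, formula)
    # eval is acceptable for this trusted sketch, not for production use
    return eval(expression)
```

For instance, `evaluate_formula("[HIV06ml] * 100", {"HIV06ml": 18})` yields 1800, matching the TTR result imported later in this test.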
Set some interim fields present in the results test file into the created
AnalysisService:
>>> service_interim_fields = [{'keyword': 'ASRExpDate',
... 'title': 'ASRExpDate',
... 'unit': '',
... 'default': ''},
... {'keyword': 'ASRLotNumber',
... 'title': 'ASRLotNumber',
... 'unit': '',
... 'default': ''},
... {'keyword': 'AssayCalibrationTime',
... 'title': 'AssayCalibrationTime',
... 'unit': '',
... 'default': ''},
... {'keyword': 'FinalResult',
... 'title': 'FinalResult',
... 'unit': '',
... 'default': ''},
... {'keyword': 'Location',
... 'title': 'Location',
... 'unit': '',
... 'default': ''},
... ]
>>> analysisservice.setInterimFields(service_interim_fields)
>>> interims = analysisservice.getInterimFields()
>>> map(lambda i: i.get("keyword"), interims)
['ASRExpDate', 'ASRLotNumber', 'AssayCalibrationTime', 'FinalResult', 'Location']
Create an AnalysisRequest with this AnalysisService and receive it:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()
... }
>>> service_uids = [analysisservice.UID(), analysisservice2.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/H2O-0001>
>>> ar.getReceivedBy()
''
>>> wf = getToolByName(ar, 'portal_workflow')
>>> wf.doActionFor(ar, 'receive')
>>> ar.getReceivedBy()
'test_user_1_'
Import test
Load results test file and import the results:
>>> dir_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'files'))
>>> temp_file = codecs.open(dir_path + '/AbbottM2000.log.123',
... encoding='utf-8-sig')
>>> test_file = UploadFileWrapper(temp_file)
>>> abbott_parser = Abbottm2000rtTSVParser(test_file)
>>> importer = Abbottm2000rtImporter(parser=abbott_parser,
... context=portal,
... allowed_ar_states=['sample_received', 'to_be_verified'],
... allowed_analysis_states=None,
... override=[True, True])
>>> importer.process()
Check from the importer logs that the file from where the results have been imported is indeed
the specified file:
>>> '/AbbottM2000.log.123' in importer.logs[0]
True
Check the rest of the importer logs to verify that the values were correctly imported:
>>> importer.logs[-1]
'Import finished successfully: 1 Samples and 1 results updated'
And finally check if indeed the analysis has the imported results:
>>> analyses = ar.getAnalyses()
>>> an = [analysis.getObject() for analysis in analyses if analysis.Title == 'HIV06ml'][0]
>>> an.getResult()
'18'
>>> an = [analysis.getObject() for analysis in analyses if analysis.Title == 'Test Total Results'][0]
>>> an.getResult()
'1800.0'
Action Handler Pool
The ActionHandlerPool is a singleton instance to increase performance by
postponing reindexing operations for objects.
Running this test from the buildout directory:
bin/test test_textual_doctests -t ActionHandlerPool
Test Setup
Needed Imports:
>>> from bika.lims.workflow import ActionHandlerPool
Testing
Getting an instance of the action handler pool:
>>> pool = ActionHandlerPool.get_instance()
>>> pool
<ActionHandlerPool for UIDs:[]>
When a piece of code uses the utility function doActionFor, the pool is used
to increase performance by
- avoiding that the same transition is performed multiple times
- postponing the reindexing to the end of the process
For this to work, each calling function needs to call queue_pool() to
postpone (eventual) multiple reindex operations. Each call increases the
internal num_calls counter.
When all operations are done, the calling code has to call resume(), which
decreases the counter by 1.
Multiple calls to resume() should not lead to a negative counter:
>>> for i in range(10):
... pool.resume()
Because the ActionHandlerPool is a singleton, we must ensure that it is thread safe.
This means that concurrent access to this counter must be protected.
To simulate this, we will need to simulate concurrent calls to queue_pool(),
which will add some lag in between the reading and writing operation.
>>> import random
>>> import threading
>>> import time
>>> def simulate_queue_pool(tid):
... pool.queue_pool()
... time.sleep(random.random())
>>> threads = []
>>> for num in range(100):
... t = threading.Thread(target=simulate_queue_pool, args=(num, ))
... threads.append(t)
... t.start()
>>> for t in threads:
... t.join()
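The pattern exercised above can be sketched as a singleton whose counter is guarded by a lock, so that concurrent queue_pool()/resume() calls cannot interleave their read-modify-write steps (an illustrative sketch, not the actual ActionHandlerPool code):

```python
import threading

class PoolSketch(object):
    """Illustrative thread-safe singleton counter (hypothetical class,
    mirroring the queue_pool()/resume() behaviour shown above)."""
    _instance = None
    _instance_lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        # singleton creation guarded by a class-level lock
        with cls._instance_lock:
            if cls._instance is None:
                cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.num_calls = 0
        self._mutex = threading.Lock()

    def queue_pool(self):
        # guard the read-modify-write of the counter
        with self._mutex:
            self.num_calls += 1

    def resume(self):
        # multiple resume() calls never drive the counter below zero
        with self._mutex:
            self.num_calls = max(0, self.num_calls - 1)
```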
Alphanumber
Tests the Alphanumber object, useful for alphanumeric ID generation
Running this test from the buildout directory:
bin/test test_textual_doctests -t Alphanumber
Test Setup
Needed Imports:
>>> import re
>>> from bika.lims import api
>>> from senaite.core.idserver.alphanumber import to_decimal
>>> from senaite.core.idserver.alphanumber import Alphanumber
Create and test basic alphanumeric functions:
>>> alpha = Alphanumber(0)
>>> int(alpha)
0
>>> str(alpha)
'AAA000'
>>> repr(alpha)
'AAA000'
>>> format(alpha, "2a2d")
'AA00'
>>> alpha.format("5a4d")
'AAAAA0000'
>>> "{alpha:2a4d}".format(alpha=alpha)
'AA0000'
>>> alpha1 = alpha + 1
>>> int(alpha1)
1
>>> str(alpha1)
'AAA001'
>>> repr(alpha1)
'AAA001'
>>> format(alpha1, "2a2d")
'AA01'
>>> alpha1.format("5a4d")
'AAAAA0001'
>>> "{alpha:2a4d}".format(alpha=alpha1)
'AA0001'
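The format specs used above follow an "<N>a<M>d" convention: N alpha characters followed by M digits. A small sketch of how such a spec could be parsed (hypothetical helper, not the actual parser in senaite.core):

```python
import re

def parse_alpha_format(spec):
    """Parse a '3a2d'-style spec into (num_chars, num_digits)."""
    match = re.match(r"^(\d+)a(\d+)d$", spec)
    if match is None:
        raise ValueError("invalid alphanumber format: %r" % spec)
    return int(match.group(1)), int(match.group(2))
```

For example, `parse_alpha_format("2a4d")` returns `(2, 4)`, i.e. two alpha characters and four digits.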
>>> alpha2 = Alphanumber(2674, num_digits=2)
>>> int(alpha2)
2674
>>> str(alpha2)
'ABB01'
Addition of an integer:
>>> alpha3 = alpha2 + 1
>>> int(alpha3)
2675
>>> str(alpha3)
'ABB02'
>>> to_decimal(str(alpha3))
2675
Addition of another Alphanumber object:
>>> alpha3 = alpha2 + alpha1
>>> int(alpha3)
2675
>>> str(alpha3)
'ABB02'
>>> alpha3 = alpha2 + alpha2
>>> int(alpha3)
5348
>>> str(alpha3)
'ACC02'
>>> to_decimal(str(alpha3))
5348
Subtraction of an integer:
>>> alpha3 = alpha2 - 1
>>> int(alpha3)
2673
>>> str(alpha3)
'ABA99'
>>> to_decimal(str(alpha3))
2673
Subtraction of another Alphanumber object:
>>> alpha3 = alpha2 - alpha1
>>> int(alpha3)
2673
>>> str(alpha3)
'ABA99'
>>> alpha3 = alpha2 - alpha2
>>> int(alpha3)
0
>>> str(alpha3)
'AAA00'
>>> to_decimal(str(alpha3))
0
We can also create the instance with a string representing an alpha number:
>>> alpha = Alphanumber("ABB23", num_chars=3, num_digits=2)
>>> str(alpha)
'ABB23'
>>> int(alpha)
2696
>>> to_decimal(str(alpha))
2696
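From the outputs above, the decimal value can be reconstructed by hand: the leading letters form a base-26 number (A=0, B=1, ...) and each letter step spans 10**num_digits - 1 values, the digit part being 0 only for the value zero. A sketch of that decoding, inferred from the examples (not the senaite.core implementation):

```python
def alpha_to_decimal(alpha, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    """Decode an alphanumeric id like 'ABB23' into its decimal value."""
    # split into leading letters and trailing digits, e.g. "ABB23" -> "ABB", "23"
    letters = alpha.rstrip("0123456789")
    digits = alpha[len(letters):]
    # interpret the letters as a base-len(alphabet) number
    index = 0
    for char in letters:
        index = index * len(alphabet) + alphabet.index(char)
    # each letter increment spans (10 ** num_digits - 1) values
    span = 10 ** len(digits) - 1
    return index * span + int(digits)
```

For instance, 'ABB23' decodes as 27 * 99 + 23 = 2696, matching to_decimal above.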
We can even change the number of digits to default (3 digits) and the result
will be formatted accordingly:
>>> alpha = Alphanumber("ABB23")
>>> str(alpha)
'AAC698'
>>> int(alpha)
2696
Or we can do the same, but using another Alphanumber instance as argument:
>>> alpha = Alphanumber(alpha, num_chars=2)
>>> str(alpha)
'AC698'
>>> int(alpha)
2696
We can also use our own alphabet:
>>> alpha = Alphanumber(alpha, alphabet="yu")
>>> str(alpha)
'yuy698'
>>> int(alpha)
2696
>>> to_decimal(str(alpha), alphabet="yu")
2696
And we can add or subtract regardless of alphabet, number of digits and number
of characters:
>>> alpha1 = Alphanumber("ABB23")
>>> int(alpha1)
2696
>>> alpha2 = Alphanumber("yu753", alphabet="yu")
>>> int(alpha2)
1752
>>> alpha3 = alpha1 + alpha2
>>> int(alpha3)
4448
>>> str(alpha3)
'AAE452'
Formatted value must change when a different number of digits is used:
>>> str(alpha3)
'AAE452'
>>> format(alpha3, "2a3d")
'AE452'
>>> format(alpha3, "5a3d")
'AAAAE452'
>>> format(alpha3, "3a2d")
'ABS92'
We can also compare two Alphanumbers:
>>> alpha1 > alpha3
False
>>> alpha4 = Alphanumber(4448)
>>> alpha3 == alpha4
True
Or get the max and the min:
>>> alphas = [alpha1, alpha3, alpha2]
>>> alpha_max = max(alphas)
>>> int(alpha_max)
4448
>>> alpha_min = min(alphas)
>>> int(alpha_min)
1752
We can also convert to an integer using the lims api:
>>> api.to_int(alpha4)
4448
Analysis Profile
Running this test from the buildout directory:
bin/test test_textual_doctests -t AnalysisProfile
Needed Imports:
>>> import re
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.content.analysisrequest import AnalysisRequest
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils import tmpID
>>> from bika.lims.interfaces import ISubmitted
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import getCurrentState
>>> from bika.lims.workflow import getAllowedTransitions
>>> from DateTime import DateTime
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from plone.app.testing import setRoles
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def get_services(sample):
... analyses = sample.getAnalyses(full_objects=True)
... services = map(lambda an: an.getAnalysisService(), analyses)
... return services
>>> def receive_sample(sample):
... do_action_for(sample, "receive")
>>> def submit_analyses(sample):
... for analysis in sample.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def verify_analyses(sample):
... for analysis in sample.getAnalyses(full_objects=True):
... if ISubmitted.providedBy(analysis):
... do_action_for(analysis, "verify")
>>> def retract_analyses(sample):
... for analysis in sample.getAnalyses(full_objects=True):
... if ISubmitted.providedBy(analysis):
... do_action_for(analysis, "retract")
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(setup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> Zn = api.create(setup.bika_analysisservices, "AnalysisService", title="Zinc", Keyword="Zn", Price="20", Category=category.UID())
>>> service_uids1 = [Cu.UID(), Fe.UID(), Au.UID()]
>>> service_uids2 = [Zn.UID()]
>>> service_uids3 = [Cu.UID(), Fe.UID(), Au.UID(), Zn.UID()]
>>> profile1 = api.create(setup.bika_analysisprofiles, "AnalysisProfile", title="Profile", Service=service_uids1)
>>> profile2 = api.create(setup.bika_analysisprofiles, "AnalysisProfile", title="Profile", Service=service_uids2)
>>> profile3 = api.create(setup.bika_analysisprofiles, "AnalysisProfile", title="Profile", Service=service_uids3)
Assign Profile(s)
Assigning Analysis Profiles adds the Analyses of the profile to the sample.
>>> setup.setSelfVerificationEnabled(True)
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
Create some Analysis Requests:
>>> ar1 = create_analysisrequest(client, request, values, [Au.UID()])
>>> ar2 = create_analysisrequest(client, request, values, [Fe.UID()])
>>> ar3 = create_analysisrequest(client, request, values, [Cu.UID()])
Apply the profile object. Note the custom setProfiles (plural) setter:
>>> ar1.setProfiles(profile1)
All analyses from the profile should be added to the sample:
>>> services = get_services(ar1)
>>> set(map(api.get_uid, services)).issuperset(service_uids1)
True
The profile is applied to the sample:
>>> profile1 in ar1.getProfiles()
True
Apply the profile UID:
>>> ar2.setProfiles(profile2.UID())
All analyses from the profile should be added to the sample:
>>> services = get_services(ar2)
>>> set(map(api.get_uid, services)).issuperset(service_uids2)
True
The profile is applied to the sample:
>>> profile2 in ar2.getProfiles()
True
Apply multiple profiles:
>>> ar3.setProfiles([profile1, profile2, profile3.UID()])
All analyses from the profiles should be added to the sample:
>>> services = get_services(ar3)
>>> set(map(api.get_uid, services)).issuperset(service_uids1 + service_uids2 + service_uids3)
True
Remove Profile(s)
Removing an Analysis Profile from the Sample retains the assigned analyses:
>>> analyses = ar1.getAnalyses(full_objects=True)
>>> ar1.setProfiles([])
>>> ar1.getProfiles()
[]
>>> set(ar1.getAnalyses(full_objects=True)) == set(analyses)
True
Assigning Profiles in "to_be_verified" status
>>> ar4 = create_analysisrequest(client, request, values, [Au.UID()])
>>> receive_sample(ar4)
>>> submit_analyses(ar4)
>>> api.get_workflow_status_of(ar4)
'to_be_verified'
Setting the profile still works in this state; adding the new analyses rolls
the sample back to "sample_received":
>>> ar4.setProfiles(profile1.UID())
>>> api.get_workflow_status_of(ar4)
'sample_received'
>>> services = get_services(ar4)
>>> set(map(api.get_uid, services)).issuperset(service_uids1 + [Au.UID()])
True
Analysis Request invalidate
Running this test from the buildout directory:
bin/test test_textual_doctests -t AnalysisRequestInvalidate
Test Setup
Needed Imports:
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> browser = self.getBrowser()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
>>> bika_labcontacts = bika_setup.bika_labcontacts
>>> bika_storagelocations = bika_setup.bika_storagelocations
>>> bika_samplingdeviations = bika_setup.bika_samplingdeviations
>>> bika_sampleconditions = bika_setup.bika_sampleconditions
>>> portal_url = portal.absolute_url()
>>> bika_setup_url = portal_url + "/bika_setup"
Test user:
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager', 'LabManager',])
Create Analysis Requests (AR)
An AnalysisRequest can only be created inside a Client:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="NARALABS", ClientID="JG")
>>> client
<Client at /plone/clients/client-1>
To create a new AR, a Contact is needed:
>>> contact = api.create(client, "Contact", Firstname="Juan", Surname="Gallostra")
>>> contact
<Contact at /plone/clients/client-1/contact-1>
A SampleType defines how long the sample can be retained, the minimum volume
needed, whether it is hazardous, the point where the sample was taken, etc.:
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="water", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
A SamplePoint defines the location where a Sample was taken:
>>> samplepoint = api.create(bika_samplepoints, "SamplePoint", title="Lake of Constance")
>>> samplepoint
<SamplePoint at /plone/bika_setup/bika_samplepoints/samplepoint-1>
An AnalysisCategory categorizes different AnalysisServices:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
An AnalysisService defines an analysis service offered by the laboratory:
>>> analysisservice = api.create(bika_analysisservices, "AnalysisService", title="PH", ShortTitle="ph", Category=analysiscategory, Keyword="PH")
>>> analysisservice
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
Finally, the AnalysisRequest can be created:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID(),
... 'Priority': '1',
... }
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/water-0001>
Also, make sure that the Analysis Request only has one analysis. You will
see why later:
>>> len(ar.getAnalyses())
1
Submit Analyses results for the current Analysis Request
First transition the Analysis Request to received:
>>> transitioned = do_action_for(ar, 'receive')
>>> transitioned[0]
True
>>> api.get_workflow_status_of(ar)
'sample_received'
Set the results of the Analysis and transition them for verification:
>>> for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult('12')
... transitioned = do_action_for(analysis, 'submit')
>>> transitioned[0]
True
Check that both the Analysis Request and its analyses have been transitioned
to 'to_be_verified':
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> not_to_be_verified = 0
>>> for analysis in ar.getAnalyses(full_objects=True):
... if api.get_workflow_status_of(analysis) != 'to_be_verified':
... not_to_be_verified += 1
>>> not_to_be_verified
0
Verify Analyses results for the current Analysis Request
By default, the same user cannot verify results, so we enable self-verification:
>>> ar.bika_setup.setSelfVerificationEnabled(True)
Select all analyses from the Analysis Request and verify them:
>>> for analysis in ar.getAnalyses(full_objects=True):
... transitioned = do_action_for(analysis, 'verify')
>>> transitioned[0]
True
Check that both the Analysis Request and its analyses have been transitioned
to 'verified':
>>> api.get_workflow_status_of(ar)
'verified'
>>> not_verified = 0
>>> for analysis in ar.getAnalyses(full_objects=True):
... if api.get_workflow_status_of(analysis) != 'verified':
... not_verified += 1
>>> not_verified
0
Invalidate the Analysis Request
When an Analysis Request is invalidated, two things should happen:
1- The Analysis Request is transitioned to 'invalid'. Analyses remain in the
verified state.
2- A new Analysis Request (retest) is created automatically, with the same
analyses as the invalidated one, but in the sample_received state.
Invalidate the Analysis Request:
>>> transitioned = do_action_for(ar, 'invalidate')
>>> transitioned[0]
True
>>> api.get_workflow_status_of(ar)
'invalid'
>>> ar.isInvalid()
True
Verify a new Analysis Request (retest) has been created, with the same
analyses as the invalidated one:
>>> retest = ar.getRetest()
>>> retest
<AnalysisRequest at /plone/clients/client-1/water-0001-R01>
>>> retest.getInvalidated()
<AnalysisRequest at /plone/clients/client-1/water-0001>
>>> api.get_workflow_status_of(retest)
'sample_received'
>>> retest_ans = map(lambda an: an.getKeyword(), retest.getAnalyses(full_objects=True))
>>> invalid_ans = map(lambda an: an.getKeyword(), ar.getAnalyses(full_objects=True))
>>> len(set(retest_ans)-set(invalid_ans))
0
Invalidate the retest
We can even invalidate the retest generated previously. As a result, a new
retest will be created.
First, submit all analyses from the retest:
>>> for analysis in retest.getAnalyses(full_objects=True):
... analysis.setResult(12)
... transitioned = do_action_for(analysis, 'submit')
>>> transitioned[0]
True
>>> api.get_workflow_status_of(retest)
'to_be_verified'
Now, verify all analyses from the retest:
>>> for analysis in retest.getAnalyses(full_objects=True):
... transitioned = do_action_for(analysis, 'verify')
>>> transitioned[0]
True
>>> not_verified = 0
>>> for analysis in retest.getAnalyses(full_objects=True):
... if api.get_workflow_status_of(analysis) != 'verified':
... not_verified += 1
>>> not_verified
0
>>> api.get_workflow_status_of(retest)
'verified'
Invalidate the Retest:
>>> transitioned = do_action_for(retest, 'invalidate')
>>> transitioned[0]
True
>>> api.get_workflow_status_of(retest)
'invalid'
>>> retest.isInvalid()
True
Verify a new Analysis Request (retest 2) has been created, with the same
analyses as the invalidated one (retest):
>>> retest2 = retest.getRetest()
>>> retest2
<AnalysisRequest at /plone/clients/client-1/water-0001-R02>
>>> retest2.getInvalidated()
<AnalysisRequest at /plone/clients/client-1/water-0001-R01>
>>> retest2.getInvalidated().getInvalidated()
<AnalysisRequest at /plone/clients/client-1/water-0001>
>>> api.get_workflow_status_of(retest2)
'sample_received'
>>> not_registered = 0
>>> for analysis in retest2.getAnalyses(full_objects=True):
... if api.get_workflow_status_of(analysis) != 'unassigned':
... not_registered += 1
>>> not_registered
0
>>> retest_ans = map(lambda an: an.getKeyword(), retest2.getAnalyses(full_objects=True))
>>> invalid_ans = map(lambda an: an.getKeyword(), retest.getAnalyses(full_objects=True))
>>> len(set(retest_ans)-set(invalid_ans))
0
Analysis Request retract
Running this test from the buildout directory:
bin/test test_textual_doctests -t AnalysisRequestRetract
Test Setup
Needed Imports:
>>> import transaction
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
>>> bika_labcontacts = bika_setup.bika_labcontacts
>>> bika_storagelocations = bika_setup.bika_storagelocations
>>> bika_samplingdeviations = bika_setup.bika_samplingdeviations
>>> bika_sampleconditions = bika_setup.bika_sampleconditions
>>> portal_url = portal.absolute_url()
>>> bika_setup_url = portal_url + "/bika_setup"
Test user:
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Create Analysis Requests (AR)
An AnalysisRequest can only be created inside a Client:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="NARALABS", ClientID="JG")
>>> client
<Client at /plone/clients/client-1>
To create a new AR, a Contact is needed:
>>> contact = api.create(client, "Contact", Firstname="Juan", Surname="Gallostra")
>>> contact
<Contact at /plone/clients/client-1/contact-1>
A SampleType defines how long the sample can be retained, the minimum volume
needed, whether it is hazardous, the point where the sample was taken, etc.:
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="water", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
A SamplePoint defines the location where a Sample was taken:
>>> samplepoint = api.create(bika_samplepoints, "SamplePoint", title="Lake of Constance")
>>> samplepoint
<SamplePoint at /plone/bika_setup/bika_samplepoints/samplepoint-1>
An AnalysisCategory categorizes different AnalysisServices:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
An AnalysisService defines an analysis service offered by the laboratory:
>>> analysisservice = api.create(bika_analysisservices, "AnalysisService", title="PH", ShortTitle="ph", Category=analysiscategory, Keyword="PH")
>>> analysisservice
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
Finally, the AnalysisRequest can be created:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID(),
... 'Priority': '1',
... }
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/water-0001>
Also, make sure that the Analysis Request only has one analysis. You will
see why later:
>>> len(ar.getAnalyses())
1
Submit Analyses results for the current Analysis Request
First transition the Analysis Request to received:
>>> transitioned = do_action_for(ar, 'receive')
>>> transitioned[0]
True
>>> ar.portal_workflow.getInfoFor(ar, 'review_state')
'sample_received'
Set the results of the Analysis and transition them for verification:
>>> for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult('12')
... transitioned = do_action_for(analysis, 'submit')
>>> transitioned[0]
True
Check that both the Analysis Request and its analyses have been transitioned
to 'to_be_verified':
>>> ar.portal_workflow.getInfoFor(ar, 'review_state')
'to_be_verified'
>>> not_to_be_verified = 0
>>> for analysis in ar.getAnalyses(full_objects=True):
... if analysis.portal_workflow.getInfoFor(analysis, 'review_state') != 'to_be_verified':
... not_to_be_verified += 1
>>> not_to_be_verified
0
Retract the Analysis Request
When an Analysis Request is retracted, two things should happen:
1- The Analysis Request is transitioned to 'sample_received'. Since the
results have been retracted, its review state goes back to just before the
submission of results.
2- Its current analyses are transitioned to 'retracted' and a duplicate of
each analysis is created (so that results can be introduced again) with
review state 'unassigned'.
Retract the Analysis Request:
>>> transitioned = do_action_for(ar, 'retract')
>>> transitioned[0]
True
>>> ar.portal_workflow.getInfoFor(ar, 'review_state')
'sample_received'
Verify that its analyses have also been retracted and that a new analysis has
been created with review status 'unassigned'. Since we previously checked
that the AR had only one analysis, the count for both 'retracted' and
'unassigned' analyses should be one:
>>> registered = 0
>>> retracted = 0
>>> for analysis in ar.getAnalyses(full_objects=True):
... if analysis.portal_workflow.getInfoFor(analysis, 'review_state') == 'retracted':
... retracted += 1
... if analysis.portal_workflow.getInfoFor(analysis, 'review_state') != 'unassigned':
... registered += 1
>>> registered
1
>>> retracted
1
Analysis Requests
Analysis Requests in Bika LIMS describe an Analysis Order from a Client to the
Laboratory. Each Analysis Request manages a Sample, which holds the data of the
physical Sample from the Client. The Sample is currently not handled by its own
in Bika LIMS. So the managing Analysis Request is the primary interface from the
User (Client) to the Sample.
Running this test from the buildout directory:
bin/test test_textual_doctests -t AnalysisRequests
Test Setup
Needed Imports:
>>> import transaction
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.api import do_transition_for
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
>>> bika_labcontacts = bika_setup.bika_labcontacts
>>> bika_storagelocations = bika_setup.bika_storagelocations
>>> bika_samplingdeviations = bika_setup.bika_samplingdeviations
>>> bika_sampleconditions = bika_setup.bika_sampleconditions
>>> portal_url = portal.absolute_url()
>>> bika_setup_url = portal_url + "/bika_setup"
Test user:
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Analysis Requests (AR)
An AnalysisRequest can only be created inside a Client:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="RIDING BYTES", ClientID="RB")
>>> client
<Client at /plone/clients/client-1>
To create a new AR, a Contact is needed:
>>> contact = api.create(client, "Contact", Firstname="Ramon", Surname="Bartl")
>>> contact
<Contact at /plone/clients/client-1/contact-1>
A SampleType defines how long the sample can be retained, the minimum volume
needed, whether it is hazardous, the point where the sample was taken, etc.:
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="water", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
A SamplePoint defines the location where a Sample was taken:
>>> samplepoint = api.create(bika_samplepoints, "SamplePoint", title="Lake of Constance")
>>> samplepoint
<SamplePoint at /plone/bika_setup/bika_samplepoints/samplepoint-1>
An AnalysisCategory categorizes different AnalysisServices:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
An AnalysisService defines an analysis service offered by the laboratory:
>>> analysisservice = api.create(bika_analysisservices, "AnalysisService", title="PH", ShortTitle="ph", Category=analysiscategory, Keyword="PH")
>>> analysisservice
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
Finally, the AnalysisRequest can be created:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID(),
... 'Priority': '1',
... }
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/water-0001>
>>> ar.getPriority()
'1'
>>> ar.getPriorityText()
u'Highest'
DateReceived field should be editable in Received state
For this we need an AR with more than one Analysis:
>>> from bika.lims.adapters.widgetvisibility import DateReceivedFieldVisibility
>>> from bika.lims.workflow import doActionFor
>>> as2 = api.create(bika_analysisservices, 'AnalysisService', title='Another Type Of Analysis', ShortTitle='Another', Category=analysiscategory, Keyword='AN')
>>> ar1 = create_analysisrequest(client, request, values, service_uids + [as2.UID()])
In states earlier than sample_received the DateReceived field is uneditable:
>>> field = ar1.getField("DateReceived")
>>> field.checkPermission("edit", ar1) and True or False
False
In the sample_received state, however, it is possible to modify the field. In this
case the field visibility adapter simply passes the schema default through unchanged.
>>> p = api.do_transition_for(ar1, 'receive')
>>> field = ar1.getField("DateReceived")
>>> field.checkPermission("edit", ar1) and True or False
True
After any analysis has been submitted, the field is no longer editable. The adapter
sets the widget.visible to ‘invisible’.
>>> an = ar1.getAnalyses(full_objects=True)[0]
>>> an.setResult('1')
>>> p = doActionFor(an, 'submit')
>>> DateReceivedFieldVisibility(ar1)(ar1, 'edit', ar1.schema['DateReceived'], 'default')
'invisible'
Analysis Service - Activations and Inactivations
The inactivation and activation of Analysis Services rely on senaite_deactivable_type_workflow.
To prevent inconsistencies that could have undesired effects, an Analysis Service
can only be deactivated if it has no active dependents (that is, other
services that depend on the Analysis Service to calculate their results).
Following the same reasoning, an Analysis Service can only be activated if it
has no calculation assigned or, if it does, both the calculation and its
dependencies (that is, other services the Analysis Service depends on to
calculate its result) are active.
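The dependency rules above can be sketched in plain Python. The dict-based service model and function names below are illustrative assumptions, not SENAITE code:

```python
def can_deactivate(name, services):
    """A service may be deactivated only if no active service depends on it."""
    return not any(
        svc["active"] and name in svc["depends_on"]
        for svc in services.values()
    )

def can_activate(name, services):
    """A service may be activated only if all of its dependencies are active."""
    return all(services[dep]["active"] for dep in services[name]["depends_on"])

# Hypothetical service registry mirroring the Ca/Mg/Hardness setup below
services = {
    "Ca": {"active": True, "depends_on": []},
    "Mg": {"active": True, "depends_on": []},
    "Hardness": {"active": True, "depends_on": ["Ca", "Mg"]},
}

print(can_deactivate("Ca", services))      # False: active Hardness depends on Ca
services["Hardness"]["active"] = False
print(can_deactivate("Ca", services))      # True: no active dependents remain
services["Ca"]["active"] = False
print(can_activate("Hardness", services))  # False: its dependency Ca is inactive
```

This mirrors the workflow guards exercised in the transitions below.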
Test Setup
Running this test from the buildout directory:
bin/test -t AnalysisServiceInactivation
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.workflow import doActionFor
>>> from bika.lims.workflow import getAllowedTransitions
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from plone.app.testing import setRoles
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> bika_analysiscategories = bikasetup.bika_analysiscategories
>>> bika_analysisservices = bikasetup.bika_analysisservices
>>> bika_calculations = bikasetup.bika_calculations
>>> bika_suppliers = bikasetup.bika_suppliers
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bika_suppliers, "Supplier", Name="Naralabs")
>>> Ca = api.create(bika_analysisservices, "AnalysisService", title="Calcium", Keyword="Ca", Price="15", Category=category.UID())
>>> Mg = api.create(bika_analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg", Price="10", Category=category.UID())
>>> Au = api.create(bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Deactivation of Analysis Service
All services can be deactivated:
>>> getAllowedTransitions(Ca)
['deactivate']
>>> getAllowedTransitions(Mg)
['deactivate']
>>> getAllowedTransitions(Au)
['deactivate']
But if we create a new Analysis Service with a calculation that depends on them:
>>> calc = api.create(bika_calculations, "Calculation", title="Total Hardness")
>>> calc.setFormula("[Ca] + [Mg]")
>>> hardness = api.create(bika_analysisservices, "AnalysisService", title="Total Hardness", Keyword="TotalHardness")
>>> hardness.setCalculation(calc)
Then Ca and Mg can no longer be deactivated, because Hardness is active and
depends on them:
>>> getAllowedTransitions(Ca)
[]
>>> getAllowedTransitions(Mg)
[]
>>> getAllowedTransitions(Au)
['deactivate']
>>> getAllowedTransitions(hardness)
['deactivate']
If we deactivate Hardness:
>>> performed = doActionFor(hardness, 'deactivate')
>>> api.is_active(hardness)
False
>>> getAllowedTransitions(hardness)
['activate']
Then we will be able to deactivate both Ca and Mg:
>>> getAllowedTransitions(Ca)
['deactivate']
>>> getAllowedTransitions(Mg)
['deactivate']
Activation of Analysis Service
Deactivate the Analysis Service Ca:
>>> performed = doActionFor(Ca, 'deactivate')
>>> api.is_active(Ca)
False
>>> getAllowedTransitions(Ca)
['activate']
And now, we cannot activate Hardness, because one of its dependencies (Ca) is
not active:
>>> api.is_active(hardness)
False
>>> getAllowedTransitions(hardness)
[]
But if we activate Ca again:
>>> performed = doActionFor(Ca, 'activate')
>>> api.is_active(Ca)
True
Hardness can be activated again:
>>> getAllowedTransitions(hardness)
['activate']
>>> performed = doActionFor(hardness, 'activate')
>>> api.is_active(hardness)
True
Analysis Turnaround Time
Running this test from the buildout directory:
bin/test test_textual_doctests -t AnalysisTurnaroundTime
Test Setup
Needed Imports:
>>> import re
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.api.analysis import get_formatted_interval
>>> from bika.lims.api.analysis import is_out_of_range
>>> from bika.lims.content.analysisrequest import AnalysisRequest
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils import tmpID
>>> from bika.lims.workflow import doActionFor
>>> from bika.lims.workflow import getCurrentState
>>> from bika.lims.workflow import getAllowedTransitions
>>> from DateTime import DateTime
>>> from datetime import timedelta
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from plone.app.testing import setRoles
>>> from Products.ATContentTypes.utils import DT2dt, dt2DT
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def change_receive_date(ar, days):
... prev_date = ar.getDateReceived()
... ar.Schema().getField('DateReceived').set(ar, prev_date + days)
... for analysis in ar.getAnalyses(full_objects=True):
... an_created = analysis.created()
... analysis.getField('creation_date').set(analysis, an_created + days)
>>> def compute_due_date(analysis):
... start = DT2dt(analysis.getStartProcessDate())
... tat = api.to_minutes(**analysis.getMaxTimeAllowed())
... due_date = start + timedelta(minutes=tat)
... return dt2DT(due_date)
>>> def compute_duration(date_from, date_to):
... return (date_to - date_from) * 24 * 60
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), DuplicateVariation="0.5")
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID(), DuplicateVariation="0.5")
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID(), DuplicateVariation="0.5")
>>> Mg = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg", Price="20", Category=category.UID(), DuplicateVariation="0.5")
>>> service_uids = [api.get_uid(an) for an in [Cu, Fe, Au, Mg]]
>>> sampletype_uid = api.get_uid(sampletype)
Set different Turnaround Times for every single Analysis Service:
>>> Au.setMaxTimeAllowed(dict(days=2, hours=8, minutes=30))
>>> maxtime = Au.getMaxTimeAllowed()
>>> [maxtime.get("days"), maxtime.get("hours"), maxtime.get("minutes")]
[2, 8, 30]
>>> Cu.setMaxTimeAllowed(dict(days=1, hours=4, minutes=0))
>>> maxtime = Cu.getMaxTimeAllowed()
>>> [maxtime.get("days"), maxtime.get("hours"), maxtime.get("minutes")]
[1, 4, 0]
>>> Fe.setMaxTimeAllowed(dict(days=3, hours=0, minutes=0))
>>> maxtime = Fe.getMaxTimeAllowed()
>>> [maxtime.get("days"), maxtime.get("hours"), maxtime.get("minutes")]
[3, 0, 0]
And leave Magnesium (Mg) without any Turnaround Time set, so it will use the
default Turnaround time set in setup:
>>> maxtime = bikasetup.getDefaultTurnaroundTime()
>>> [maxtime.get("days"), maxtime.get("hours"), maxtime.get("minutes")]
[5, 0, 0]
>>> maxtime = Mg.getMaxTimeAllowed()
>>> [maxtime.get("days"), maxtime.get("hours"), maxtime.get("minutes")]
[5, 0, 0]
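The due-date arithmetic used by the compute_due_date helper above can be sketched with plain datetime objects; to_minutes here is an assumed re-implementation of the api.to_minutes conversion, not the SENAITE function itself:

```python
from datetime import datetime, timedelta

def to_minutes(days=0, hours=0, minutes=0):
    # Flatten a {days, hours, minutes} dict into total minutes.
    return days * 24 * 60 + hours * 60 + minutes

def due_date(start, max_time_allowed):
    # Due date = moment the analysis entered processing + turnaround time.
    return start + timedelta(minutes=to_minutes(**max_time_allowed))

start = datetime(2024, 1, 1, 8, 0)
print(to_minutes(days=2, hours=8, minutes=30))             # 3390
print(due_date(start, dict(days=2, hours=8, minutes=30)))  # 2024-01-03 16:30:00
```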
Create an Analysis Request:
>>> values = {
... 'Client': api.get_uid(client),
... 'Contact': api.get_uid(contact),
... 'DateSampled': date_now,
... 'SampleType': sampletype_uid,
... 'Priority': '1',
... }
>>> ar = create_analysisrequest(client, request, values, service_uids)
Get the Analyses for further use:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> analyses = sorted(analyses, key=lambda an: an.getKeyword())
>>> map(lambda an: an.getKeyword(), analyses)
['Au', 'Cu', 'Fe', 'Mg']
>>> analyses_dict = {an.getKeyword(): an for an in analyses}
Test TAT with analyses received 2d ago
We manually force a receive date 2 days earlier so we can test:
>>> received = map(lambda an: an.getDateReceived(), analyses)
>>> new_received = map(lambda rec: rec-2, received)
>>> change_receive_date(ar, -2)
>>> received = map(lambda an: an.getDateReceived(), analyses)
>>> start_process = map(lambda an: an.getStartProcessDate(), analyses)
>>> new_received == received == start_process
True
Analyses Au, Fe and Mg are not late, but Cu is:
>>> map(lambda an: an.isLateAnalysis(), analyses)
[False, True, False, False]
Check Due Dates:
>>> expected_due_dates = map(lambda an: compute_due_date(an), analyses)
>>> due_dates = map(lambda an: an.getDueDate(), analyses)
>>> due_dates == expected_due_dates
True
And duration:
>>> expected = map(lambda an: int(compute_duration(an.getStartProcessDate(), DateTime())), analyses)
>>> durations = map(lambda an: int(an.getDuration()), analyses)
>>> expected == durations
True
Earliness in minutes. Note the value for Cu is negative (it is late), and the
value for Mg derives from the default Turnaround Time:
>>> map(lambda an: int(round(an.getEarliness())), analyses)
[510, -1200, 1440, 4320]
Lateness in minutes. Note that all values are negative except for Cu:
>>> map(lambda an: int(round(an.getLateness())), analyses)
[-510, 1200, -1440, -4320]
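The earliness and lateness figures above follow directly from each service's turnaround time minus the two elapsed days. A minimal sketch, with the turnaround totals in minutes taken from the values set earlier:

```python
# Turnaround times in minutes: Au=2d8h30m, Cu=1d4h, Fe=3d, Mg=5d (setup default)
tat = {"Au": 3390, "Cu": 1680, "Fe": 4320, "Mg": 7200}
elapsed = 2 * 24 * 60  # samples were received two days ago

# Earliness is the remaining time budget; it turns negative once the
# analysis is late. Lateness is simply its negation.
earliness = {kw: minutes - elapsed for kw, minutes in tat.items()}
lateness = {kw: -value for kw, value in earliness.items()}

print([earliness[kw] for kw in ("Au", "Cu", "Fe", "Mg")])  # [510, -1200, 1440, 4320]
print([lateness[kw] for kw in ("Au", "Cu", "Fe", "Mg")])   # [-510, 1200, -1440, -4320]
```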
Because one of the analyses (Cu) is late, the Analysis Request is late too.
Batch creation and Client assignment
Running this test from the buildout directory:
bin/test test_textual_doctests -t BatchClientAssignment
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from zope.lifecycleevent import modified
Variables and basic objects for the test:
>>> portal = self.portal
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
Batch creation and Client assignment
Create a new Batch:
>>> batches = portal.batches
>>> batch = api.create(batches, "Batch", title="Test batch")
>>> batch.aq_parent
<BatchFolder at /plone/batches>
The batches folder contains the batch, while Client folder remains empty:
>>> len(batches.objectValues("Batch"))
1
>>> len(client.objectValues("Batch"))
0
When a Client is assigned to the Batch, the latter is automatically moved into
the Client's folder:
>>> batch.setClient(client)
>>> modified(batch)
>>> len(batches.objectValues("Batch"))
0
>>> len(client.objectValues("Batch"))
1
If the Client is assigned on creation, the behavior is the same:
>>> batch = api.create(portal.batches, "Batch", Client=client)
>>> len(batches.objectValues("Batch"))
0
>>> len(client.objectValues("Batch"))
2
Calculations
Bika LIMS can dynamically calculate a value based on the results of several
Analyses with a formula.
Running this test from the buildout directory:
bin/test test_textual_doctests -t Calculations
Test Setup
Needed Imports:
>>> import transaction
>>> from operator import methodcaller
>>> from plone import api as ploneapi
>>> from bika.lims import api
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_calculations = bika_setup.bika_calculations
>>> bika_analysisservices = bika_setup.bika_analysisservices
Test user:
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Calculation
Calculations are created in the bika_setup/bika_calculations folder. They
offer a Formula field, where keywords from Analyses can be used to calculate a
result.
Each AnalysisService contains a Keyword field, which can be referenced in a formula:
>>> as1 = api.create(bika_analysisservices, "AnalysisService", title="Calcium")
>>> as1.setKeyword("Ca")
>>> as1.reindexObject()
>>> as2 = api.create(bika_analysisservices, "AnalysisService", title="Magnesium")
>>> as2.setKeyword("Mg")
>>> as2.reindexObject()
Create one Calculation:
>>> calc = api.create(bika_calculations, "Calculation", title="Total Hardness")
The Formula field references the Keywords from Analysis Services:
>>> calc.setFormula("[Ca] + [Mg]")
>>> calc.getFormula()
'[Ca] + [Mg]'
>>> calc.getMinifiedFormula()
'[Ca] + [Mg]'
The Calculation depends now on the two Analysis Services:
>>> sorted(calc.getCalculationDependencies(flat=True), key=methodcaller('getId'))
[<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>, <AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-2>]
It is also possible to find out if an AnalysisService depends on the calculation:
>>> as1.setCalculation(calc)
>>> calc.getCalculationDependants()
[<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>]
Or to find out which services have selected a particular calculation as their
primary Calculation field’s value:
>>> from bika.lims.browser.fields.uidreferencefield import get_backreferences
>>> get_backreferences(calc, 'AnalysisServiceCalculation')
['...']
The Formula can be tested with dummy values in the TestParameters field:
>>> form_value = [{"keyword": "Ca", "value": 5.6}, {"keyword": "Mg", "value": 3.3},]
>>> calc.setTestParameters(form_value)
>>> calc.setTestResult(form_value)
>>> calc.getTestResult()
'8.9'
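The keyword substitution behind the Formula field can be sketched as follows. evaluate_formula is a simplified stand-in for illustration only, not the sandboxed evaluation SENAITE actually performs:

```python
import re

def evaluate_formula(formula, values):
    # Replace every [Keyword] placeholder with its numeric value, then evaluate.
    expression = re.sub(
        r"\[([^\]]+)\]", lambda match: repr(values[match.group(1)]), formula)
    return eval(expression)  # acceptable in a trusted sketch only

result = evaluate_formula("[Ca] + [Mg]", {"Ca": 5.6, "Mg": 3.3})
print("%g" % result)  # 8.9
```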
Within a Calculation it is also possible to use a Python function to calculate
a result. The user can add a Python module as a dotted name and a member
function in the PythonImports field:
>>> calc.setPythonImports([{'module': 'math', 'function': 'floor'}])
>>> calc.setFormula("floor([Ca] + [Mg])")
>>> calc.getFormula()
'floor([Ca] + [Mg])'
>>> calc.setTestResult(form_value)
>>> calc.getTestResult()
'8.0'
A Calculation can therefore dynamically get a module and a member:
>>> calc._getModuleMember('math', 'ceil')
<built-in function ceil>
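Resolving a dotted module name plus a member name can be sketched with importlib; get_module_member below is an assumed equivalent of the _getModuleMember helper, not its actual implementation:

```python
import importlib

def get_module_member(module_name, member_name):
    # Import the module by its dotted name and return the requested attribute,
    # or None when the member does not exist.
    module = importlib.import_module(module_name)
    return getattr(module, member_name, None)

ceil = get_module_member("math", "ceil")
print(ceil(8.9))                          # 9
print(get_module_member("math", "nope"))  # None
```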
Cobas Integra 400+ import interface
Running this test from the buildout directory:
bin/test test_textual_doctests -t CobasIntegra400plusImportInterface
Test Setup
Needed imports:
>>> import os
>>> import transaction
>>> from Products.CMFCore.utils import getToolByName
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from DateTime import DateTime
>>> import codecs
>>> from senaite.core.exportimport import instruments
>>> from senaite.core.exportimport.instruments.cobasintegra.model_400_plus.model_400_plus import CobasIntegra400plus2Importer
>>> from senaite.core.exportimport.instruments.cobasintegra.model_400_plus.model_400_plus import CobasIntegra400plus2CSVParser
>>> from senaite.core.exportimport.auto_import_results import UploadFileWrapper
Functional helpers:
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_instruments = bika_setup.bika_instruments
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager:
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Availability of instrument interface
Check that the instrument interface is available:
>>> exims = []
>>> for exim_id in instruments.__all__:
... exims.append(exim_id)
>>> 'cobasintegra.model_400_plus.model_400_plus' in exims
True
Assigning the Import Interface to an Instrument
Create an Instrument and assign to it the tested Import Interface:
>>> instrument = api.create(bika_instruments, "Instrument", title="Instrument-1")
>>> instrument
<Instrument at /plone/bika_setup/bika_instruments/instrument-1>
>>> instrument.setImportDataInterface(['cobasintegra.model_400_plus.model_400_plus'])
>>> instrument.getImportDataInterface()
['cobasintegra.model_400_plus.model_400_plus']
Import test
Required steps: Create and receive Analysis Request for import test
An AnalysisRequest can only be created inside a Client, and it also requires a Contact and
a SampleType:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="BHPLAB", ClientID="BLAB")
>>> client
<Client at /plone/clients/client-1>
>>> contact = api.create(client, "Contact", Firstname="Moffat", Surname="More")
>>> contact
<Contact at /plone/clients/client-1/contact-1>
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="H2O", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
Create an AnalysisCategory (which categorizes different AnalysisServices), and add to it some
of the AnalysisServices that are found in the results file:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
>>> analysisservice_1 = api.create(bika_analysisservices,
... "AnalysisService",
... title="WBC",
... ShortTitle="wbc",
... Category=analysiscategory,
... Keyword="WBC")
>>> analysisservice_1
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
>>> analysisservice_2 = api.create(bika_analysisservices,
... "AnalysisService",
... title="RBC",
... ShortTitle="rbc",
... Category=analysiscategory,
... Keyword="RBC")
>>> analysisservice_2
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-2>
>>> analysisservice_3 = api.create(bika_analysisservices,
... "AnalysisService",
... title="HGB",
... ShortTitle="hgb",
... Category=analysiscategory,
... Keyword="HGB")
>>> analysisservice_3
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-3>
>>> analysisservice_4 = api.create(bika_analysisservices,
... "AnalysisService",
... title="HCT",
... ShortTitle="hct",
... Category=analysiscategory,
... Keyword="HCT")
>>> analysisservice_4
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-4>
>>> analysisservices = [analysisservice_1, analysisservice_2, analysisservice_3, analysisservice_4]
Create an AnalysisRequest with this AnalysisService and receive it:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()
... }
>>> service_uids = [analysisservice.UID() for analysisservice in analysisservices]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/H2O-0001>
>>> ar.getReceivedBy()
''
>>> wf = getToolByName(ar, 'portal_workflow')
>>> wf.doActionFor(ar, 'receive')
>>> ar.getReceivedBy()
'test_user_1_'
Import test
Load results test file and import the results:
>>> dir_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'files'))
>>> temp_file = codecs.open(dir_path + '/cobasintegra.csv',
... encoding='utf-8-sig')
>>> test_file = UploadFileWrapper(temp_file)
>>> cobasintegra_parser = CobasIntegra400plus2CSVParser(test_file)
>>> importer = CobasIntegra400plus2Importer(parser=cobasintegra_parser,
... context=portal,
... allowed_ar_states=['sample_received', 'to_be_verified'],
... allowed_analysis_states=None,
... override=[True, True])
>>> importer.process()
Check from the importer logs that the results were indeed imported from the
specified file:
>>> 'cobasintegra.csv' in importer.logs[0]
True
Check the rest of the importer logs to verify that the values were correctly imported:
>>> importer.logs[1:]
['End of file reached successfully: 25 objects, 8 analyses, 112 results'...
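The utf-8-sig codec used when opening the results file matters because instrument exports often begin with a UTF-8 byte order mark. A small standalone sketch with a hypothetical results file:

```python
import codecs
import csv
import io
import os
import tempfile

# Write a CSV with a leading UTF-8 BOM, as many instrument exports have.
path = os.path.join(tempfile.mkdtemp(), "results.csv")
with open(path, "wb") as f:
    f.write(codecs.BOM_UTF8 + b"SampleID,Keyword,Result\nH2O-0001,WBC,4.2\n")

# 'utf-8-sig' strips the BOM, so the first header cell parses cleanly
# instead of carrying an invisible '\ufeff' prefix.
with io.open(path, encoding="utf-8-sig") as f:
    rows = list(csv.reader(f))
print(rows[0])  # ['SampleID', 'Keyword', 'Result']
```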
Test Setup
>>> import transaction
>>> from plone import api as ploneapi
>>> from zope.lifecycleevent import modified
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from bika.lims.api import create
>>> portal = self.portal
>>> portal_url = portal.absolute_url()
>>> bika_setup = portal.bika_setup
>>> bika_setup_url = portal_url + "/bika_setup"
>>> browser = self.getBrowser()
>>> setRoles(portal, TEST_USER_ID, ['LabManager', 'Manager', 'Owner'])
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def login(user=TEST_USER_ID, password=TEST_USER_PASSWORD):
... browser.open(portal_url + "/login_form")
... browser.getControl(name='__ac_name').value = user
... browser.getControl(name='__ac_password').value = password
... browser.getControl(name='buttons.login').click()
... assert("__ac_password" not in browser.contents)
... return ploneapi.user.get_current()
>>> def logout():
... browser.open(portal_url + "/logout")
... assert("You are now logged out" in browser.contents)
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
>>> def get_workflows_for(context):
... # Returns a tuple of assigned workflows for the given context
... workflow = ploneapi.portal.get_tool("portal_workflow")
... return workflow.getChainFor(context)
>>> def get_workflow_status_of(context):
... # Returns the workflow status of the given context
... return ploneapi.content.get_state(context)
Client
A client lives in the /clients folder:
>>> clients = portal.clients
>>> client1 = create(clients, "Client", title="Client-1", ClientID="ClientID1")
>>> client2 = create(clients, "Client", title="Client-2", ClientID="ClientID2")
Each Client folder can contain Contacts:
>>> contact1 = create(client1, "Contact", Firstname="Contact", Surname="One")
>>> contact2 = create(client2, "Contact", Firstname="Contact", Surname="Two")
User
A user is able to log in to the system.
Create a new user for the contact:
>>> user1 = ploneapi.user.create(email="contact-1@example.com", username="user-1", password=TEST_USER_PASSWORD, properties=dict(fullname="Test User 1"))
>>> user2 = ploneapi.user.create(email="contact-2@example.com", username="user-2", password=TEST_USER_PASSWORD, properties=dict(fullname="Test User 2"))
>>> transaction.commit()
Client Browser Test
Login with the first user:
>>> user = login(user1.id)
The user is not allowed to access any clients folder:
>>> browser.open(client1.absolute_url())
Traceback (most recent call last):
...
Unauthorized: ...
Linking the user to a client contact grants access to this client:
>>> contact1.setUser(user1)
True
>>> transaction.commit()
Linking a user adds this user to the Client group:
>>> client_group = client1.get_group()
>>> user1.getId() in client_group.getAllGroupMemberIds()
True
This gives the user the global Client role:
>>> sorted(ploneapi.user.get_roles(user=user1))
['Authenticated', 'Client', 'Member']
It also grants local Owner role on the client object:
>>> sorted(ploneapi.user.get_roles(user=user1, obj=client1))
['Authenticated', 'Client', 'Member', 'Owner']
The user is able to modify the client object properties:
>>> browser.open(client1.absolute_url() + "/base_edit")
>>> "edit_form" in browser.contents
True
As well as the contact object properties:
>>> browser.open(contact1.absolute_url() + "/base_edit")
>>> "edit_form" in browser.contents
True
But the user can not access other clients:
>>> browser.open(client2.absolute_url())
Traceback (most recent call last):
...
Unauthorized: ...
Or modify other clients:
>>> browser.open(client2.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Unlinking the user revokes all access to the client:
>>> contact1.unlinkUser()
True
>>> transaction.commit()
The user has no local owner role anymore on the client:
>>> sorted(ploneapi.user.get_roles(user=user1, obj=client1))
['Authenticated', 'Member']
>>> browser.open(client1.absolute_url())
Traceback (most recent call last):
...
Unauthorized: ...
Duplicate results range
The valid result range for a duplicate analysis is calculated by applying a
duplicate variation percentage to the result from the original analysis. If the
analysis has result options enabled or string results enabled, results from
both duplicate and original analysis must match 100%.
Running this test from the buildout directory:
bin/test test_textual_doctests -t DuplicateResultsRange
Test Setup
Needed imports:
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from bika.lims import api
>>> from bika.lims.api.analysis import is_out_of_range
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
Functional Helpers:
>>> def new_sample(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': DateTime().strftime("%Y-%m-%d"),
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def new_worksheet(num_analyses, services):
...     analyses = []
...     for num in range(num_analyses):
...         sample = new_sample(services)
...         analyses.extend(sample.getAnalyses(full_objects=True))
...     worksheet = api.create(portal.worksheets, "Worksheet")
...     worksheet.addAnalyses(analyses)
...     return worksheet
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = api.get_setup()
Create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID())
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Duplicate of an analysis with numeric result
Set the duplicate variation in percentage for Cu:
>>> Cu.setDuplicateVariation("10")
>>> Cu.getDuplicateVariation()
'10.00'
Create a Sample and receive:
>>> sample = new_sample([Cu])
Create a worksheet and assign the analyses:
>>> analyses = sample.getAnalyses(full_objects=True)
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.addAnalyses(analyses)
Add a duplicate for analysis Cu:
>>> worksheet.addDuplicateAnalyses(1)
[<DuplicateAnalysis at /plone/worksheets/WS-001/...
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.getAnalysis()
<Analysis at /plone/clients/client-1/W-0001/Cu>
>>> duplicate.getResultsRange()
{}
Set a result of 50 for the original analysis Cu:
>>> cu = analyses[0]
>>> cu.setResult(50)
>>> duplicate.getAnalysis().getResult()
'50'
>>> result_range = duplicate.getResultsRange()
>>> (result_range.min, result_range.max)
('45.0', '55.0')
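The range arithmetic is straightforward: the original result plus/minus the duplicate variation percentage. A minimal sketch of that computation:

```python
def duplicate_range(result, variation_pct):
    # Valid duplicate window: original result +/- the variation percentage.
    delta = result * variation_pct / 100.0
    return result - delta, result + delta

low, high = duplicate_range(50, 10)
print((low, high))         # (45.0, 55.0)
print(low <= 47 <= high)   # True: 47 is within the window
print(low <= 42 <= high)   # False: 42 falls outside it
```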
We can set a result for the duplicate within the range:
>>> duplicate.setResult(47)
>>> is_out_of_range(duplicate)
(False, False)
Or an out-of-range result:
>>> duplicate.setResult(42)
>>> is_out_of_range(duplicate)
(True, True)
We can do the same exercise the other way round and submit the result for the
duplicate first:
>>> sample = new_sample([Cu])
>>> cu = sample.getAnalyses(full_objects=True)[0]
>>> worksheet.addAnalyses([cu])
We add a duplicate for the new analysis, which is located at slot number 3:
>>> worksheet.addDuplicateAnalyses(src_slot=3)
[<DuplicateAnalysis at /plone/worksheets/WS-001/...
>>> duplicate = worksheet.getDuplicateAnalyses()
>>> duplicate = filter(lambda dup: dup.getAnalysis() == cu, duplicate)[0]
>>> duplicate.getAnalysis()
<Analysis at /plone/clients/client-1/W-0002/Cu>
>>> duplicate.getResultsRange()
{}
We set the result for the duplicate first, but it does not have a valid
result range because the original analysis has no result yet:
>>> duplicate.setResult(58)
>>> duplicate.getResultsRange()
{}
>>> is_out_of_range(duplicate)
(False, False)
>>> cu.setResult(50)
>>> result_range = duplicate.getResultsRange()
>>> (result_range.min, result_range.max)
('45.0', '55.0')
>>> is_out_of_range(duplicate)
(True, True)
Duplicate of an analysis with result options
Let's add some result options to service Fe:
>>> results_options = [
... {"ResultValue": "1", "ResultText": "Number 1"},
... {"ResultValue": "2", "ResultText": "Number 2"},
... {"ResultValue": "3", "ResultText": "Number 3"}]
>>> Fe.setResultOptions(results_options)
>>> Fe.getResultOptions()
[{'ResultValue': '1', 'ResultText': 'Number 1'}, {'ResultValue': '2', 'ResultText': 'Number 2'}, {'ResultValue': '3', 'ResultText': 'Number 3'}]
Create a Sample and receive:
>>> sample = new_sample([Fe])
Create a worksheet and assign the analyses:
>>> analyses = sample.getAnalyses(full_objects=True)
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.addAnalyses(analyses)
Add a duplicate for analysis Fe:
>>> worksheet.addDuplicateAnalyses(1)
[<DuplicateAnalysis at /plone/worksheets/WS-002/...
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> fe = duplicate.getAnalysis()
>>> fe
<Analysis at /plone/clients/client-1/W-0003/Fe>
>>> duplicate.getResultsRange()
{}
Set a result for original analysis:
>>> fe.setResult(2)
>>> fe.getResult()
'2'
>>> fe.getFormattedResult()
'Number 2'
The result range for the duplicate no longer considers the duplicate
variation; rather, it expects an exact result:
>>> duplicate.getResultsRange()
{}
>>> duplicate.setResult(1)
>>> duplicate.getResult()
'1'
>>> duplicate.getFormattedResult()
'Number 1'
>>> duplicate.getResultsRange()
{}
>>> is_out_of_range(duplicate)
(True, True)
>>> duplicate.setResult(2)
>>> duplicate.getResultsRange()
{}
>>> is_out_of_range(duplicate)
(False, False)
>>> duplicate.setResult(3)
>>> duplicate.getResultsRange()
{}
>>> is_out_of_range(duplicate)
(True, True)
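For analyses with result options, the checks above reduce to an exact match against the original result. A hedged sketch of this behavior (hypothetical helper returning the same `(out_of_range, out_of_shoulders)` tuple shape as `is_out_of_range`):

```python
def is_duplicate_out_of_range(duplicate_result, original_result):
    # For result-option analyses the duplicate must match the original
    # exactly; any other option is out of range (simplified sketch).
    out = duplicate_result != original_result
    return (out, out)

print(is_duplicate_out_of_range("1", "2"))  # (True, True)
print(is_duplicate_out_of_range("2", "2"))  # (False, False)
```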
Duplicate of an analysis with string results enabled
Let’s make the analysis Au accept string results:
>>> Au.setStringResult(True)
Create a Sample and receive:
>>> sample = new_sample([Au])
Create a worksheet and assign the analyses:
>>> analyses = sample.getAnalyses(full_objects=True)
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.addAnalyses(analyses)
Add a duplicate for analysis Au:
>>> worksheet.addDuplicateAnalyses(1)
[<DuplicateAnalysis at /plone/worksheets/WS-003/...
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> au = duplicate.getAnalysis()
>>> au
<Analysis at /plone/clients/client-1/W-0004/Au>
>>> duplicate.getStringResult()
True
>>> duplicate.getResultsRange()
{}
Submit a string result for original analysis:
>>> au.setResult("Positive")
>>> au.getResult()
'Positive'
>>> au.getFormattedResult()
'Positive'
The result range for the duplicate no longer considers the duplicate
variation; rather, it expects an exact result:
>>> duplicate.getResultsRange()
{}
>>> duplicate.setResult("Negative")
>>> duplicate.getResult()
'Negative'
>>> duplicate.getFormattedResult()
'Negative'
>>> duplicate.getResultsRange()
{}
>>> is_out_of_range(duplicate)
(True, True)
>>> duplicate.setResult("Positive")
>>> duplicate.getResultsRange()
{}
>>> is_out_of_range(duplicate)
(False, False)
But when we submit a numeric result for an analysis with string results
enabled, the system behaves as if it were indeed a numeric result:
>>> Au.setDuplicateVariation("10")
>>> Au.getDuplicateVariation()
'10.00'
>>> Au.getStringResult()
True
>>> sample = new_sample([Au])
>>> au = sample.getAnalyses(full_objects=True)[0]
>>> worksheet.addAnalyses([au])
We add a duplicate for the new analysis, which is located at slot number 3:
>>> worksheet.addDuplicateAnalyses(src_slot=3)
[<DuplicateAnalysis at /plone/worksheets/WS-003/...
>>> duplicate = worksheet.getDuplicateAnalyses()
>>> duplicate = filter(lambda dup: dup.getAnalysis() == au, duplicate)[0]
>>> duplicate.getAnalysis()
<Analysis at /plone/clients/client-1/W-0005/Au>
>>> duplicate.getStringResult()
True
>>> duplicate.getResultsRange()
{}
And we set a numeric result:
>>> au.setResult(50)
>>> results_range = duplicate.getResultsRange()
>>> (results_range.min, results_range.max)
('45.0', '55.0')
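The two behaviors combine: numeric results get a variation window, string results require an exact match. A hedged sketch of the combined check, assuming the numeric/string decision is made by attempting a float conversion (hypothetical helper):

```python
def duplicate_check(original, dup, variation_pct):
    # Sketch: numeric results get a +/- variation window, while
    # non-numeric (string) results require an exact match.
    try:
        value = float(original)
    except (TypeError, ValueError):
        return dup == original
    delta = value * variation_pct / 100.0
    return value - delta <= float(dup) <= value + delta

print(duplicate_check("50", 47, 10))                # True
print(duplicate_check("Positive", "Negative", 10))  # False
```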
Dynamic Analysis Specifications
A Dynamic Analysis Specification can be assigned to Analysis Specifications.
When retrieving the result ranges (specification) for an Analysis, a lookup is
done on the Dynamic Analysis Specification.
Example
Given is an Excel with the following minimal set of columns:
-------  --------  ---  ---
Keyword  Method    min  max
-------  --------  ---  ---
Ca       Method A  1    2
Ca       Method B  3    4
Mg       Method A  5    6
Mg       Method B  7    8
-------  --------  ---  ---
This Excel is uploaded to a Dynamic Analysis Specification object, which is
linked to an Analysis Specification for the Sample Type “Water”.
A new “Water” Sample is created containing a Ca analysis to be tested with
Method B. The result range selected will be [3;4].
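The lookup described in this example can be sketched as a row match on Keyword and Method (hypothetical helper and data; the real implementation reads the rows from the uploaded Excel file):

```python
def lookup_dynamic_spec(rows, keyword, method):
    # Sketch: the first row whose Keyword and Method columns match the
    # analysis wins; its min/max become the result range.
    for row in rows:
        if row.get("Keyword") == keyword and row.get("Method") == method:
            return {"min": row.get("min"), "max": row.get("max")}
    return None

rows = [
    {"Keyword": "Ca", "Method": "Method A", "min": "1", "max": "2"},
    {"Keyword": "Ca", "Method": "Method B", "min": "3", "max": "4"},
    {"Keyword": "Mg", "Method": "Method A", "min": "5", "max": "6"},
    {"Keyword": "Mg", "Method": "Method B", "min": "7", "max": "8"},
]
print(lookup_dynamic_spec(rows, "Ca", "Method B"))  # {'min': '3', 'max': '4'}
```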
Running this test from the buildout directory:
bin/test test_textual_doctests -t DynamicAnalysisSpec
Test Setup
Needed imports:
>>> from DateTime import DateTime
>>> from six import StringIO
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from openpyxl import Workbook
>>> from openpyxl.writer.excel import save_virtual_workbook
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> from plone.namedfile.file import NamedBlobFile
>>> import csv
Some Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = api.get_setup()
Functional Helpers:
>>> def new_sample(services, specification=None, results_ranges=None):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': DateTime().strftime("%Y-%m-%d"),
... 'SampleType': sampletype.UID(),
... 'Analyses': map(api.get_uid, services),
... 'Specification': specification or None }
...
... ar = create_analysisrequest(client, request, values)
... transitioned = do_action_for(ar, "receive")
... return ar
Privileges:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Creating a Dynamic Analysis Specification
Dynamic Analysis Specifications are actually only small wrappers around an Excel
file, where result ranges are defined per row.
Let’s create first a small helper function that generates an Excel for us:
>>> def to_excel(data):
... workbook = Workbook()
... first_sheet = workbook.get_active_sheet()
... reader = csv.reader(StringIO(data))
... for row in reader:
... first_sheet.append(row)
... return NamedBlobFile(save_virtual_workbook(workbook))
Then we create the data according to the example given above:
>>> data = """Keyword,Method,min,max
... Ca,Method A,1,2
... Ca,Method B,3,4
... Mg,Method A,5,6
... Mg,Method B,7,8"""
Now we can create a Dynamic Analysis Specification Object:
>>> ds = api.create(setup.dynamic_analysisspecs, "DynamicAnalysisSpec")
>>> ds.specs_file = to_excel(data)
We can get now directly the parsed header:
>>> header = ds.get_header()
>>> header
[u'Keyword', u'Method', u'min', u'max']
And the result ranges:
>>> rr = ds.get_specs()
>>> map(lambda r: [r.get(k) for k in header], rr)
[[u'Ca', u'Method A', u'1', u'2'], [u'Ca', u'Method B', u'3', u'4'], [u'Mg', u'Method A', u'5', u'6'], [u'Mg', u'Method B', u'7', u'8']]
We can also get the specs by Keyword:
>>> mg_rr = ds.get_by_keyword()["Mg"]
>>> map(lambda r: [r.get(k) for k in header], mg_rr)
[[u'Mg', u'Method A', u'5', u'6'], [u'Mg', u'Method B', u'7', u'8']]
Hooking in a Dynamic Analysis Specification
Dynamic Analysis Specifications can only be assigned to a default Analysis Specification.
First we build some basic setup structure:
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> method_a = api.create(portal.methods, "Method", title="Method A")
>>> method_b = api.create(portal.methods, "Method", title="Method B")
>>> Ca = api.create(setup.bika_analysisservices, "AnalysisService", title="Calcium", Keyword="Ca", Category=category, Method=method_a)
>>> Mg = api.create(setup.bika_analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg", Category=category, Method=method_a)
Then we create a default Analysis Specification:
>>> rr1 = {"keyword": "Ca", "min": 10, "max": 20, "warn_min": 9, "warn_max": 21}
>>> rr2 = {"keyword": "Mg", "min": 10, "max": 20, "warn_min": 9, "warn_max": 21}
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="H2O")
>>> specification = api.create(setup.bika_analysisspecs, "AnalysisSpec", title="Lab Water Spec", SampleType=sampletype.UID(), ResultsRange=[rr1, rr2])
And create a new sample with the given Analyses and the Specification:
>>> services = [Ca, Mg]
>>> sample = new_sample(services, specification=specification)
>>> ca, mg = sample["Ca"], sample["Mg"]
The specification is according to the values we have set before:
>>> ca_spec = ca.getResultsRange()
>>> ca_spec["min"], ca_spec["max"]
(10, 20)
>>> mg_spec = mg.getResultsRange()
>>> mg_spec["min"], mg_spec["max"]
(10, 20)
Now we hook in our Dynamic Analysis Specification to the standard Specification:
>>> specification.setDynamicAnalysisSpec(ds)
The specification needs to be unset and set again, so that the dynamic values are looked up:
>>> sample.setSpecification(None)
>>> sample.setSpecification(specification)
The specification of the Ca Analysis with the Method Method A:
>>> ca_spec = ca.getResultsRange()
>>> ca_spec["min"], ca_spec["max"]
('1', '2')
Now let’s change the Ca Analysis Method to Method B:
>>> ca.setMethod(method_b)
Unset and set the specification again:
>>> sample.setSpecification(None)
>>> sample.setSpecification(specification)
And get the results range again:
>>> ca_spec = ca.getResultsRange()
>>> ca_spec["min"], ca_spec["max"]
('3', '4')
The same now with the Mg Analysis in one run:
>>> mg_spec = mg.getResultsRange()
>>> mg_spec["min"], mg_spec["max"]
('5', '6')
>>> mg.setMethod(method_b)
Unset and set the specification again:
>>> sample.setSpecification(None)
>>> sample.setSpecification(specification)
>>> mg_spec = mg.getResultsRange()
>>> mg_spec["min"], mg_spec["max"]
('7', '8')
History Aware Reference Field
This field behaves almost the same as the standard AT ReferenceField, but
stores the version of the referenced object on set and keeps that version.
Currently, only analyses use this field, to store the exact version of their
calculation. This ensures that later changes, e.g. in the formula, do not
affect already created analyses.
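A minimal sketch of the idea behind such a field, assuming versioned objects are addressable by a (uid, version) pair (hypothetical class and repository, not the actual AT field implementation):

```python
class HistoryAwareRef:
    """Sketch of a history-aware reference: remember both the target
    identifier and its version at set-time, and always resolve to that
    exact snapshot later on."""

    def __init__(self):
        self.uid = None
        self.version = None

    def set(self, uid, version):
        self.uid = uid
        self.version = version

    def get(self, repository):
        # resolve to the snapshot captured when the reference was set,
        # ignoring any newer versions of the same object
        return repository[(self.uid, self.version)]

# versions of a calculation formula, keyed by (uid, version)
repo = {
    ("calc-1", 1): "[Ca] + [Mg]",
    ("calc-1", 2): "2 * ([Ca] + [Mg])",
}
ref = HistoryAwareRef()
ref.set("calc-1", 1)
# later edits create version 2, but the reference still resolves to v1
print(ref.get(repo))  # [Ca] + [Mg]
```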
Running this test from the buildout directory:
bin/test test_textual_doctests -t HistoryAwareReferenceField
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from bika.lims.api.security import *
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def new_sample(services):
... values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "DateSampled": date_now,
... "SampleType": sampletype.UID()}
... service_uids = map(api.get_uid, services)
... return create_analysisrequest(client, request, values, service_uids)
>>> def get_analysis(sample, id):
... ans = sample.getAnalyses(getId=id, full_objects=True)
... if len(ans) != 1:
... return None
... return ans[0]
Environment Setup
Setup the testing environment:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
>>> setRoles(portal, TEST_USER_ID, ['LabManager', ])
>>> user = api.get_current_user()
LIMS Setup
Setup the Lab for testing:
>>> setup.setSelfVerificationEnabled(True)
>>> analysisservices = setup.bika_analysisservices
>>> calculations = setup.bika_calculations
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH")
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="Water")
Content Setup
Create some Analysis Services with unique Keywords:
>>> Ca = api.create(analysisservices, "AnalysisService", title="Calcium", Keyword="Ca")
>>> Mg = api.create(analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg")
>>> TH = api.create(analysisservices, "AnalysisService", title="Total Hardness", Keyword="TH")
Create a calculation for Total Hardness:
>>> calc = api.create(calculations, "Calculation", title="Total Hardness")
The Formula field references the keywords from Analysis Services:
>>> calc.setFormula("[Ca] + [Mg]")
>>> calc.processForm()
>>> calc.getFormula()
'[Ca] + [Mg]'
>>> calc.getMinifiedFormula()
'[Ca] + [Mg]'
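The formula evaluation can be sketched as simple keyword interpolation (hypothetical helper; the real calculation engine does considerably more, e.g. interim fields, dependencies and error handling):

```python
def calculate(formula, results):
    # Replace each [Keyword] in the formula with the corresponding
    # analysis result, then evaluate the resulting expression.
    expr = formula
    for keyword, value in results.items():
        expr = expr.replace("[{}]".format(keyword), str(value))
    return eval(expr)

print(calculate("[Ca] + [Mg]", {"Ca": 10, "Mg": 5}))        # 15
print(calculate("2 * ([Ca] + [Mg])", {"Ca": 10, "Mg": 5}))  # 30
```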
Set the calculation to the TH analysis service:
>>> TH.setCalculation(calc)
Create a new Sample:
>>> sample = new_sample([Ca, Mg, TH])
Get the TH analysis:
>>> th = get_analysis(sample, "TH")
The calculation of the analysis should be unchanged:
>>> th_calc = th.getCalculation()
>>> th_calc.getFormula()
'[Ca] + [Mg]'
Now we change the calculation formula:
>>> calc.setFormula("2 * ([Ca] + [Mg])")
>>> calc.getFormula()
'2 * ([Ca] + [Mg])'
>>> calc.processForm()
The calculation of the analysis should be unchanged:
>>> th_calc = th.getCalculation()
>>> th_calc.getFormula()
'[Ca] + [Mg]'
ID Server
The ID Server in SENAITE LIMS provides IDs for content items based on a given
format specification. The format string is constructed in the same way as a
Python format() string, based on predefined variables per content type. The
only variable available to all types is ‘seq’. Currently, seq can be
constructed either using a number generator or a counter of existing items.
For generated IDs, one can specify a point at which the format string will be
split to create the generator key. For counter IDs, one must specify the
context and the type of counter, which is either the number of backreferences
or the number of contained objects.
Configuration Settings:
* format:
  - a python format string constructed from predefined variables like client,
    sampleType.
  - the special variable ‘seq’ must be positioned last in the format string
* sequence type: [generated|counter]
* context: if the sequence type is counter, provides the context for the
  counting function
* counter type: [backreference|contained]
* counter reference: a parameter to the counting function
* prefix: default prefix if none provided in the format string
* split length: the number of parts to be included in the prefix
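Under these settings, the generated-sequence behavior can be sketched as a per-key counter filling the format string (hypothetical helper; the real ID Server persists counters and derives the key from the prefix and split length):

```python
# in-memory counters, keyed by the generator key
counters = {}

def next_id(fmt, key, **variables):
    # Increment the counter for this key and fill the format string,
    # supplying 'seq' plus any content-type variables.
    counters[key] = counters.get(key, 0) + 1
    return fmt.format(seq=counters[key], **variables)

print(next_id("BA-{year}-{seq:04d}", "batch", year="17"))  # BA-17-0001
print(next_id("BA-{year}-{seq:04d}", "batch", year="17"))  # BA-17-0002
```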
ToDo:
* validation of format strings
Running this test from the buildout directory:
Test Setup
Needed Imports:
>>> import transaction
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from zope.component import getUtility
>>> from senaite.core.idserver import alphanumber as alpha
>>> from bika.lims import api
>>> from senaite.core import idserver
>>> from senaite.core.interfaces import INumberGenerator
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> year = date_now.split('-')[0][2:]
>>> sample_date = DateTime(2017, 1, 31)
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> bika_sampletypes = setup.bika_sampletypes
>>> bika_samplepoints = setup.bika_samplepoints
>>> bika_analysiscategories = setup.bika_analysiscategories
>>> bika_analysisservices = setup.bika_analysisservices
>>> bika_labcontacts = setup.bika_labcontacts
>>> bika_storagelocations = setup.bika_storagelocations
>>> bika_samplingdeviations = setup.bika_samplingdeviations
>>> bika_sampleconditions = setup.bika_sampleconditions
>>> portal_url = portal.absolute_url()
>>> setup_url = portal_url + "/bika_setup"
>>> browser = self.getBrowser()
>>> current_user = ploneapi.user.get_current()
Test user:
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Analysis Requests (AR)
An AnalysisRequest can only be created inside a Client:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="RIDING BYTES", ClientID="RB")
>>> client
<...client-1>
To create a new AR, a Contact is needed:
>>> contact = api.create(client, "Contact", Firstname="Ramon", Surname="Bartl")
>>> contact
<...contact-1>
A SampleType defines how long the sample can be retained, the minimum volume
needed, if it is hazardous or not, the point where the sample was taken etc.:
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="water")
>>> sampletype
<...sampletype-1>
A SamplePoint defines the location, where a Sample was taken:
>>> samplepoint = api.create(bika_samplepoints, "SamplePoint", title="Lake of Constance")
>>> samplepoint
<...samplepoint-1>
An AnalysisCategory categorizes different AnalysisServices:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<...analysiscategory-1>
An AnalysisService defines an analysis service offered by the laboratory:
>>> analysisservice = api.create(bika_analysisservices, "AnalysisService",
... title="PH", Category=analysiscategory, Keyword="PH")
>>> analysisservice
<...analysisservice-1>
ID generation
IDs can contain alphanumeric or numeric numbers, depending on the provided
ID Server configuration.
Set up ID Server configuration:
>>> values = [
... {'form': '{sampleType}-{year}-{alpha:2a3d}',
... 'portal_type': 'AnalysisRequest',
... 'prefix': 'analysisrequest',
... 'sequence_type': 'generated',
... 'split_length': 1},
... {'form': 'BA-{year}-{seq:04d}',
... 'portal_type': 'Batch',
... 'prefix': 'batch',
... 'sequence_type': 'generated',
... 'split_length': 1,
... 'value': ''},
... ]
>>> setup.setIDFormatting(values)
An AnalysisRequest can be created:
>>> values = {'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': sample_date,
... 'DateSampled': sample_date,
... 'SampleType': sampletype.UID(),
... }
>>> ploneapi.user.grant_roles(user=current_user,roles = ['Sampler', 'LabClerk'])
>>> transaction.commit()
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId() == "water-{}-AA001".format(year)
True
Create a second AnalysisRequest:
>>> values = {'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': sample_date,
... 'DateSampled': sample_date,
... 'SampleType': sampletype.UID(),
... }
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId() == "water-{}-AA002".format(year)
True
Create a Batch:
>>> batches = self.portal.batches
>>> batch = api.create(batches, "Batch", ClientID="RB")
>>> batch.getId() == "BA-{}-0001".format(year)
True
Change ID formats and create new AnalysisRequest:
>>> values = [
... {'form': '{clientId}-{dateSampled:%Y%m%d}-{sampleType}-{seq:04d}',
... 'portal_type': 'AnalysisRequest',
... 'prefix': 'analysisrequest',
... 'sequence_type': 'generated',
... 'split_length': 1},
... {'form': 'BA-{year}-{seq:04d}',
... 'portal_type': 'Batch',
... 'prefix': 'batch',
... 'sequence_type': 'generated',
... 'split_length': 1,
... 'value': ''},
... ]
>>> setup.setIDFormatting(values)
>>> values = {'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': sample_date,
... 'DateSampled': sample_date,
... 'SampleType': sampletype.UID(),
... }
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'RB-20170131-water-0001'
Re-seed and create a new Batch:
>>> from zope.component import getUtility
>>> from senaite.core.interfaces import INumberGenerator
>>> ng = getUtility(INumberGenerator)
>>> seed = ng.set_number("batch-BA", 10)
>>> batch = api.create(batches, "Batch", ClientID="RB")
>>> batch.getId() == "BA-{}-0011".format(year)
True
Change ID formats and use alphanumeric ids:
>>> sampletype2 = api.create(bika_sampletypes, "SampleType", Prefix="WB")
>>> sampletype2
<...sampletype-2>
>>> values = [
... {'form': '{sampleType}-{alpha:3a1d}',
... 'portal_type': 'AnalysisRequest',
... 'prefix': 'analysisrequest',
... 'sequence_type': 'generated',
... 'split_length': 1},
... ]
>>> setup.setIDFormatting(values)
>>> values = {'SampleType': sampletype2.UID(),}
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'WB-AAA1'
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'WB-AAA2'
Now generate 8 more ARs to force the alpha segment to change:
>>> for num in range(8):
... ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'WB-AAB1'
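The alphanumeric rollover shown above can be sketched as a base-26 odometer on the letters with a numeric tail counting 1 to 9 (hypothetical helper; the real implementation lives in senaite.core.idserver.alphanumber):

```python
def next_alpha_id(alpha, digits_len=1):
    # Split e.g. 'AAA9' into letters 'AAA' and number 9; increment the
    # number until it overflows, then roll the letters (AAA9 -> AAB1).
    letters, number = alpha[:-digits_len], int(alpha[-digits_len:])
    if number < 10 ** digits_len - 1:
        return letters + str(number + 1)
    chars = list(letters)
    i = len(chars) - 1
    while i >= 0:
        if chars[i] != "Z":
            chars[i] = chr(ord(chars[i]) + 1)
            break
        chars[i] = "A"
        i -= 1
    return "".join(chars) + "1"

print(next_alpha_id("AAA2"))  # AAA3
print(next_alpha_id("AAA9"))  # AAB1
```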
And try now without separators:
>>> values = [
... {'form': '{sampleType}{alpha:3a1d}',
... 'portal_type': 'AnalysisRequest',
... 'prefix': 'analysisrequest',
... 'sequence_type': 'generated',
... 'split_length': 1},
... ]
>>> setup.setIDFormatting(values)
>>> values = {'SampleType': sampletype2.UID(),}
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
The system continues after the previous ID, even if no separator is used:
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'WBAAB3'
Now generate 8 more ARs to force the alpha segment to change:
>>> for num in range(8):
... ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'WBAAC2'
TODO: Test the case when numbers are exhausted in a sequence!
IDs with Suffix
In SENAITE < 1.3.0 it was differentiated between an Analysis Request and a
Sample. The Analysis Request acted as a “holder” of a Sample and the ID
used to be the same as the holding Sample but with the suffix -R01.
This suffix was incremented, e.g. -R01 to -R02, when a retest was requested,
while keeping the ID of the previous part constant.
With SENAITE 1.3.0 there is no differentiation anymore between Analysis Request
and Sample. However, some labs might still want to follow the old ID scheme with
the suffix and incrementation of retests to keep their analysis reports in a
sane state.
Therefore, the ID Server also supports suffixes and the logic to generate the
next suffix number for retests:
>>> values = [
... {'form': '{sampleType}-{year}-{seq:04d}-R01',
... 'portal_type': 'AnalysisRequest',
... 'prefix': 'analysisrequest',
... 'sequence_type': 'generated',
... 'split_length': 2},
... {'form': '{parent_base_id}-R{test_count:02d}',
... 'portal_type': 'AnalysisRequestRetest',
... 'prefix': 'analysisrequestretest',
... 'sequence_type': '',
... 'split_length': 1},
... ]
>>> setup.setIDFormatting(values)
Allow self-verification of results:
>>> setup.setSelfVerificationEnabled(True)
Create a new AnalysisRequest:
>>> values = {'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': sample_date,
... 'DateSampled': sample_date,
... 'SampleType': sampletype.UID(),
... }
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId() == "water-{}-0001-R01".format(year)
True
Receive the Sample:
>>> do_action_for(ar, "receive")[0]
True
Submit and verify results:
>>> an = ar.getAnalyses(full_objects=True)[0]
>>> an.setResult(5)
>>> do_action_for(an, "submit")[0]
True
>>> do_action_for(an, "verify")[0]
True
The AR should now be in the verified state:
>>> api.get_workflow_status_of(ar)
'verified'
We can invalidate it now:
>>> do_action_for(ar, "invalidate")[0]
True
A retest has now been created with the same base ID as the invalidated AR, but
with an incremented suffix:
>>> retest = ar.getRetest()
>>> retest.getId() == "water-{}-0001-R02".format(year)
True
Submit and verify results of the retest:
>>> an = retest.getAnalyses(full_objects=True)[0]
>>> an.setResult(5)
>>> do_action_for(an, "submit")[0]
True
>>> do_action_for(an, "verify")[0]
True
The retest should now be in the verified state:
>>> api.get_workflow_status_of(retest)
'verified'
We can invalidate it now:
>>> do_action_for(retest, "invalidate")[0]
True
A retest of the retest has now been created with the same base ID, but with an
incremented suffix:
>>> retest = retest.getRetest()
>>> retest.getId() == "water-{}-0001-R03".format(year)
True
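The suffix logic exercised above reduces to the '{parent_base_id}-R{test_count:02d}' format. A minimal sketch (hypothetical helper; the real ID Server derives parent_base_id and test_count from the invalidated sample):

```python
def retest_id(parent_base_id, test_count):
    # The base ID of the invalidated sample stays constant; only the
    # retest suffix counts up (R01 -> R02 -> R03 ...).
    return "{}-R{:02d}".format(parent_base_id, test_count)

print(retest_id("water-17-0001", 2))  # water-17-0001-R02
print(retest_id("water-17-0001", 3))  # water-17-0001-R03
```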
ID Slicing
The ID slicing machinery that comes with the ID Server takes into
consideration both wildcards (e.g. “{sampleType}”) and separators (by default
“-”):
>>> id_format = "AR-{sampleType}-{parentId}{alpha:3a2d}"
If default separator “-” is used, the segments generated are:
[“AR”, “{sampleType}”, “{parentId}”, “{alpha:3a2d}”]
>>> idserver.slice(id_format, separator="-", start=0, end=3)
'AR-{sampleType}-{parentId}'
>>> idserver.slice(id_format, separator="-", start=1, end=2)
'{sampleType}-{parentId}'
If no separator is used, note that the segments generated are as follows:
[“AR-”, “{sampleType}”, “-”, “{parentId}”, “{alpha:3a2d}”]
>>> idserver.slice(id_format, separator="", start=0, end=3)
'AR-{sampleType}-'
>>> idserver.slice(id_format, separator="", start=1, end=2)
'{sampleType}-'
And if we use a separator other than “-”, we have the same result as before:
>>> idserver.slice(id_format, separator=".", start=0, end=3)
'AR-{sampleType}-'
>>> idserver.slice(id_format, separator=".", start=1, end=2)
'{sampleType}-'
Unless we define an ID format in accordance:
>>> id_format = "AR.{sampleType}.{parentId}{alpha:3a2d}"
So we get the same results as the beginning:
>>> idserver.slice(id_format, separator=".", start=0, end=3)
'AR.{sampleType}.{parentId}'
>>> idserver.slice(id_format, separator=".", start=1, end=2)
'{sampleType}.{parentId}'
If we define an ID format without separators, the result will always be the
same regardless of whether a separator is passed as a parameter:
>>> id_format = "AR{sampleType}{parentId}{alpha:3a2d}"
>>> idserver.slice(id_format, separator="-", start=0, end=3)
'AR{sampleType}{parentId}'
>>> idserver.slice(id_format, separator="", start=0, end=3)
'AR{sampleType}{parentId}'
>>> idserver.slice(id_format, separator="-", start=1, end=2)
'{sampleType}{parentId}'
Try now with a simpler and quite common ID:
>>> id_format = "WS-{seq:04d}"
>>> idserver.slice(id_format, separator="-", start=0, end=1)
'WS'
>>> id_format = "WS{seq:04d}"
>>> idserver.slice(id_format, separator="-", start=0, end=1)
'WS'
>>> idserver.slice(id_format, separator="", start=0, end=1)
'WS'
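The segmentation described in this section can be sketched as follows, assuming wildcards always form their own segment and literal text between them is split on the separator (hypothetical helper, not the actual idserver.slice implementation):

```python
import re

def segments(id_format, separator="-"):
    # Wildcards like {sampleType} always form their own segment; literal
    # text is split on the separator, or kept verbatim when the
    # separator is empty.
    parts = re.split(r"(\{[^}]+\})", id_format)
    result = []
    for part in parts:
        if not part:
            continue
        if part.startswith("{"):
            result.append(part)
        elif separator:
            result.extend(p for p in part.split(separator) if p)
        else:
            result.append(part)
    return result

print(segments("AR-{sampleType}-{parentId}{alpha:3a2d}", "-"))
# ['AR', '{sampleType}', '{parentId}', '{alpha:3a2d}']
print(segments("AR-{sampleType}-{parentId}{alpha:3a2d}", ""))
# ['AR-', '{sampleType}', '-', '{parentId}', '{alpha:3a2d}']
```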
Number generator storage behavior for IDs with/without separators
The number generator machinery keeps track of the last IDs generated in order
to:
- Make the creation of new IDs faster. The system does not need to find out
  the last ID number generated for a given portal type by walking through all
  objects each time an object is created.
- Allow manual reseeding of the numbering through the number generator
  interface. Sometimes the lab wants an ID to start from a specific number,
  set manually.
These last-generated IDs are stored in annotation storage.
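A minimal sketch of this storage, assuming a plain mapping from generator key to the last number used (hypothetical helpers; in SENAITE the mapping is persisted in annotation storage and accessed via the INumberGenerator utility):

```python
# mapping from generator key to last number used
storage = {}

def generate_number(key):
    # bump and return the counter for this key
    storage[key] = storage.get(key, 0) + 1
    return storage[key]

def set_number(key, value):
    # manual reseed, as done with ng.set_number("batch-BA", 10) above
    storage[key] = value
    return value

print(generate_number("batch-BA"))  # 1
set_number("batch-BA", 10)
print(generate_number("batch-BA"))  # 11
```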
Set up the ID Server configuration with a hyphen-separated format and create an
Analysis Request:
>>> id_formatting = [
... {'form': 'NG-{sampleType}-{alpha:2a3d}',
... 'portal_type': 'AnalysisRequest',
... 'prefix': 'analysisrequest',
... 'sequence_type': 'generated',
... 'split_length': 2},
... ]
>>> setup.setIDFormatting(id_formatting)
>>> values = {'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': sample_date,
... 'DateSampled': sample_date,
... 'SampleType': sampletype.UID(),
... }
>>> service_uids = [analysisservice.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'NG-water-AA001'
Check the ID was correctly seeded in storage:
>>> number_generator = getUtility(INumberGenerator)
>>> last_number = number_generator.get("analysisrequest-NG-water")
>>> alpha.to_decimal('AA001') == last_number
True
Create a new Analysis Request with same format and check again:
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'NG-water-AA002'
>>> number_generator = getUtility(INumberGenerator)
>>> last_number = number_generator.get("analysisrequest-NG-water")
>>> alpha.to_decimal('AA002') == last_number
True
Do the same, but with an ID formatting without separators:
>>> id_formatting = [
... {'form': 'NG{sampleType}{alpha:2a3d}',
... 'portal_type': 'AnalysisRequest',
... 'prefix': 'analysisrequest',
... 'sequence_type': 'generated',
... 'split_length': 2},
... ]
>>> setup.setIDFormatting(id_formatting)
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'NGwaterAA001'
Check if the ID was correctly seeded in storage:
>>> number_generator = getUtility(INumberGenerator)
>>> last_number = number_generator.get("analysisrequest-NGwater")
>>> alpha.to_decimal('AA001') == last_number
True
Create a new Analysis Request with same format and check again:
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar.getId()
'NGwaterAA002'
>>> number_generator = getUtility(INumberGenerator)
>>> last_number = number_generator.get("analysisrequest-NGwater")
>>> alpha.to_decimal('AA002') == last_number
True
Instrument Calibration, Certification and Validation
Instruments represent the physical gadgets of the lab.
Each instrument needs calibration from time to time, which can be done in-house
or externally.
If an instrument is calibrated, an instrument certification is issued.
Certifications are only valid within a specified date range.
Instruments can also be validated by the lab personnel for a given time.
Only valid instruments, which are not currently under calibration or
validation, are available in the system and can be used to fetch results for
analyses.
Running this test from the buildout directory:
bin/test -t InstrumentCalibrationCertificationAndValidation
Test Setup
>>> import transaction
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from zope.lifecycleevent import modified
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from bika.lims.api import create
>>> portal = self.portal
>>> portal_url = portal.absolute_url()
>>> bika_setup = portal.bika_setup
>>> setRoles(portal, TEST_USER_ID, ['LabManager', 'Manager', 'Owner'])
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
>>> def get_workflows_for(context):
... # Returns a tuple of assigned workflows for the given context
... workflow = ploneapi.portal.get_tool("portal_workflow")
... return workflow.getChainFor(context)
>>> def get_workflow_status_of(context):
... # Returns the workflow status of the given context
... return ploneapi.content.get_state(context)
Instruments
All instruments live in the /bika_setup/bika_instruments folder:
>>> instruments = bika_setup.bika_instruments
>>> instrument1 = create(instruments, "Instrument", title="Instrument-1")
>>> instrument2 = create(instruments, "Instrument", title="Instrument-2")
>>> instrument3 = create(instruments, "Instrument", title="Instrument-3")
Instruments provide the IInstrument interface:
>>> from bika.lims.interfaces import IInstrument
>>> IInstrument.providedBy(instrument1)
True
Calibrations
Instrument calibrations live inside an instrument:
>>> calibration1 = create(instrument1, "InstrumentCalibration", title="Calibration-1")
>>> calibration2 = create(instrument2, "InstrumentCalibration", title="Calibration-2")
Calibrations provide the IInstrumentCalibration interface:
>>> from bika.lims.interfaces import IInstrumentCalibration
>>> IInstrumentCalibration.providedBy(calibration1)
True
Calibrations can be in progress or not, depending on the entered dates:
>>> calibration1.isCalibrationInProgress()
False
The DownFrom field specifies the start date of the calibration:
>>> calibration1.setDownFrom(DateTime())
The calibration shouldn’t be in progress with only this field set:
>>> calibration1.isCalibrationInProgress()
False
The DownTo field specifies the end date of the calibration:
>>> calibration1.setDownTo(DateTime() + 7) # In calibration for 7 days
With this valid date range, the calibration is in progress:
>>> calibration1.isCalibrationInProgress()
True
The instrument will be available again in 7 days:
>>> calibration1.getRemainingDaysInCalibration()
7
Only valid date ranges switch the calibration to “in progress”:
>>> calibration2.setDownFrom(DateTime() + 7)
>>> calibration2.setDownTo(DateTime())
>>> calibration2.isCalibrationInProgress()
False
>>> calibration2.getRemainingDaysInCalibration()
0
The instrument knows if a calibration is in progress:
>>> instrument1.isCalibrationInProgress()
True
>>> instrument2.isCalibrationInProgress()
False
Since multiple calibrations might be in place, the instrument needs to know
about the calibration which takes the longest time:
>>> calibration3 = create(instrument1, "InstrumentCalibration", title="Calibration-3")
>>> calibration3.setDownFrom(DateTime())
>>> calibration3.setDownTo(DateTime() + 365)
>>> instrument1.getLatestValidCalibration()
<InstrumentCalibration at /plone/bika_setup/bika_instruments/instrument-1/instrumentcalibration-3>
Only calibrations which are currently in progress are returned.
So a calibration that starts tomorrow is not returned:
>>> calibration3.setDownFrom(DateTime() + 1)
>>> calibration3.isCalibrationInProgress()
False
>>> instrument1.getLatestValidCalibration()
<InstrumentCalibration at /plone/bika_setup/bika_instruments/instrument-1/instrumentcalibration-1>
If all calibrations are dated in the future, none is returned:
>>> calibration1.setDownFrom(DateTime() + 1)
>>> calibration1.isCalibrationInProgress()
False
>>> instrument1.getLatestValidCalibration()
Instruments w/o any calibration should return no valid calibrations:
>>> instrument3.getLatestValidCalibration()
Calibration Certificates
Certifications live inside an instrument:
>>> certification1 = create(instrument1, "InstrumentCertification", title="Certification-1")
>>> certification2 = create(instrument2, "InstrumentCertification", title="Certification-2")
Certifications provide the IInstrumentCertification interface:
>>> from bika.lims.interfaces import IInstrumentCertification
>>> IInstrumentCertification.providedBy(certification1)
True
Certifications can be valid or not, depending on the entered dates:
>>> certification1.isValid()
False
The ValidFrom field specifies the start date of the certification:
>>> certification1.setValidFrom(DateTime())
The certification shouldn’t be valid with only this field set:
>>> certification1.isValid()
False
The ValidTo field specifies the expiration date of the certification:
>>> certification1.setValidTo(DateTime() + 7) # one week until expiration
With this valid date range, the certification is valid:
>>> certification1.isValid()
True
For exactly 7 days:
>>> certification1.getDaysToExpire()
7
Or one week:
>>> certification1.getWeeksAndDaysToExpire()
(1, 0)
Only valid date ranges switch the certification to “valid”:
>>> certification2.setValidFrom(DateTime() + 7)
>>> certification2.setValidTo(DateTime())
>>> certification2.isValid()
False
>>> certification2.getDaysToExpire()
0
>>> certification2.getWeeksAndDaysToExpire()
(0, 0)
The instrument knows if a certification is valid/out of date:
>>> instrument1.isOutOfDate()
False
>>> instrument2.isOutOfDate()
True
Since multiple certifications might be in place, the instrument needs to know
about the certification with the longest validity:
>>> certification3 = create(instrument1, "InstrumentCertification", title="Certification-3")
>>> certification3.setValidFrom(DateTime())
>>> certification3.setValidTo(DateTime() + 365) # one year until expiration
>>> instrument1.getLatestValidCertification()
<InstrumentCertification at /plone/bika_setup/bika_instruments/instrument-1/instrumentcertification-3>
Only certifications which are currently valid are returned.
So a certification whose validity starts tomorrow is not returned:
>>> certification3.setValidFrom(DateTime() + 1)
>>> certification3.isValid()
False
>>> instrument1.getLatestValidCertification()
<InstrumentCertification at /plone/bika_setup/bika_instruments/instrument-1/instrumentcertification-1>
If all certifications are dated in the future, none is returned:
>>> certification1.setValidFrom(DateTime() + 1)
>>> certification1.setValidTo(DateTime() + 7)
>>> instrument1.getLatestValidCertification()
It should also be marked as invalid:
>>> certification1.isValid()
False
But the days to expire are calculated from today until the ValidTo date.
Thus, the full 7 days are returned:
>>> certification1.getDaysToExpire()
7
Instruments w/o any certifications should also return no valid certifications:
>>> instrument3.getLatestValidCertification()
Certification Expiration Intervals
Besides the ValidFrom and ValidTo date range, users might also specify an ExpirationInterval,
which calculates the expiration date automatically on save.
Removing the ValidTo field makes the certificate invalid:
>>> certification1.setValidFrom(DateTime())
>>> certification1.setValidTo(None)
>>> certification1.isValid()
False
Setting an interval of 1 year (365 days):
>>> certification1.setExpirationInterval(365)
The interval takes now precedence over the ValidTo date, but only if the
custom setValidTo setter is called. This setter is always called when using
the edit form in Plone:
>>> certification1.setValidTo(None)
>>> certification1.isValid()
True
>>> certification1.getDaysToExpire()
365
Validation
Validations live inside an instrument:
>>> validation1 = create(instrument1, "InstrumentValidation", title="Validation-1")
>>> validation2 = create(instrument2, "InstrumentValidation", title="Validation-2")
Validations provide the IInstrumentValidation interface:
>>> from bika.lims.interfaces import IInstrumentValidation
>>> IInstrumentValidation.providedBy(validation1)
True
Validations can be in progress or not, depending on the entered dates:
>>> validation1.isValidationInProgress()
False
The DownFrom field specifies the start date of the validation:
>>> validation1.setDownFrom(DateTime())
The validation shouldn’t be in progress with only this field set:
>>> validation1.isValidationInProgress()
False
The DownTo field specifies the end date of the validation:
>>> validation1.setDownTo(DateTime() + 7) # Down for 7 days
With this valid date range, the validation is in progress:
>>> validation1.isValidationInProgress()
True
The instrument will be available after 7 days:
>>> validation1.getRemainingDaysInValidation()
7
Since multiple validations might be in place, the instrument needs to know
about the validation which takes the longest time:
>>> validation3 = create(instrument1, "InstrumentValidation", title="Validation-3")
>>> validation3.setDownFrom(DateTime())
>>> validation3.setDownTo(DateTime() + 365)
>>> instrument1.getLatestValidValidation()
<InstrumentValidation at /plone/bika_setup/bika_instruments/instrument-1/instrumentvalidation-3>
Only validations which are currently in progress are returned.
So a validation that starts tomorrow is not returned:
>>> validation3.setDownFrom(DateTime() + 1)
>>> validation3.isValidationInProgress()
False
>>> instrument1.getLatestValidValidation()
<InstrumentValidation at /plone/bika_setup/bika_instruments/instrument-1/instrumentvalidation-1>
If all validations are dated in the future, none is returned:
>>> validation1.setDownFrom(DateTime() + 1)
>>> validation1.isValidationInProgress()
False
>>> instrument1.getLatestValidValidation()
Instruments w/o any validation should return no valid validations:
>>> instrument3.getLatestValidValidation()
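Across calibrations, certifications and validations, the "latest valid" lookup follows the same pattern: filter out entries that are not currently active, then pick the one with the furthest end date. A hypothetical sketch using plain dictionaries in place of content objects:

```python
from datetime import date

def latest_valid(entries, today):
    # Keep only entries whose date range covers today, then
    # return the one that stays active the longest.
    active = [e for e in entries if e["start"] <= today <= e["end"]]
    if not active:
        return None
    return max(active, key=lambda e: e["end"])

today = date(2024, 1, 10)
entries = [
    {"id": 1, "start": date(2024, 1, 1), "end": date(2024, 1, 17)},
    {"id": 3, "start": date(2024, 1, 1), "end": date(2024, 12, 31)},
    {"id": 4, "start": date(2024, 2, 1), "end": date(2025, 1, 1)},  # future
]
print(latest_valid(entries, today)["id"])  # 3
print(latest_valid([], today))             # None
```

Entry 4 is excluded because it starts in the future, matching the behaviour of `getLatestValidCalibration` and friends above.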
Instruments import interface
We are going to test all instrument import interfaces in this single doctest.
1. Test files can only be added in tests/files/instruments/
2. The files to be imported must share the name of their import data
interface, i.e.
exportimport/instruments/generic/two_dimension.py matches
tests/files/instruments/generic.two_dimension.csv and
exportimport/instruments/varian/vistapro/icp.py matches
tests/files/instruments/varian.vistapro.icp.csv
The reason for this file naming is so that we can do
interface = varian.vistapro.icp
exec('from senaite.core.exportimport.instruments.{} import Import'.format(interface))
- All the files have the same SampleID/AR-ID:
H2O-0001
- Same analyses and same results, because they are tested against the same AR:
Ca = 0.0
Mg = 2.0
- To set DefaultResult to the float 0.0, use get_result; an
example can be found at exportimport/instruments/varian/vistapro/icp.py
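The naming convention above maps a CSV filename to both a dotted interface name and the module path that must exist on disk. A small sketch of that mapping (helper name is hypothetical):

```python
import os

def interface_for(filename):
    # "varian.vistapro.icp.csv" -> the dotted interface name and
    # the relative module path it must correspond to on disk.
    interface = os.path.splitext(filename)[0]
    module_path = interface.replace(".", "/") + ".py"
    return interface, module_path

print(interface_for("varian.vistapro.icp.csv"))
# ('varian.vistapro.icp', 'varian/vistapro/icp.py')
print(interface_for("generic.two_dimension.csv"))
# ('generic.two_dimension', 'generic/two_dimension.py')
```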
Running this test from the buildout directory:
bin/test test_textual_doctests -t InstrumentsImportInterface
Test Setup
Needed imports:
>>> import os
>>> import transaction
>>> from six import StringIO
>>> from Products.CMFCore.utils import getToolByName
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from DateTime import DateTime
>>> import codecs
>>> from senaite.core.exportimport import instruments
>>> from senaite.core.exportimport.instruments.abbott.m2000rt.m2000rt \
... import Abbottm2000rtTSVParser, Abbottm2000rtImporter
>>> from zope.publisher.browser import FileUpload, TestRequest
Functional helpers:
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> class TestFile(object):
... def __init__(self, file, filename='dummy.txt'):
... self.file = file
... self.headers = {}
... self.filename = filename
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_instruments = bika_setup.bika_instruments
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
>>> bika_calculations = bika_setup.bika_calculations
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager:
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Import test
Required steps: Create and receive Analysis Request for import test
An AnalysisRequest can only be created inside a Client, and it also requires a Contact and
a SampleType:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="NARALABS", ClientID="NLABS")
>>> client
<Client at /plone/clients/client-1>
>>> contact = api.create(client, "Contact", Firstname="Juan", Surname="Gallostra")
>>> contact
<Contact at /plone/clients/client-1/contact-1>
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="H2O", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
Create an AnalysisCategory (which categorizes different AnalysisServices), and add to it an AnalysisService.
This service matches the service specified in the file from which the import will be performed:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
>>> analysisservice1 = api.create(bika_analysisservices,
... "AnalysisService",
... title="HIV06ml",
... ShortTitle="hiv06",
... Category=analysiscategory,
... Keyword="HIV06ml")
>>> analysisservice1
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
>>> analysisservice2 = api.create(bika_analysisservices,
... 'AnalysisService',
... title='Magnesium',
... ShortTitle='Mg',
... Category=analysiscategory,
... Keyword="Mg")
>>> analysisservice2
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-2>
>>> analysisservice3 = api.create(bika_analysisservices,
... 'AnalysisService',
... title='Calcium',
... ShortTitle='Ca',
... Category=analysiscategory,
... Keyword="Ca")
>>> analysisservice3
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-3>
>>> total_calc = api.create(bika_calculations, 'Calculation', title='TotalMagCal')
>>> total_calc.setFormula('[Mg] + [Ca]')
>>> analysisservice4 = api.create(bika_analysisservices, 'AnalysisService', title='THCaCO3', Keyword="THCaCO3")
>>> analysisservice4.setUseDefaultCalculation(False)
>>> analysisservice4.setCalculation(total_calc)
>>> analysisservice4
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-4>
>>> interim_calc = api.create(bika_calculations, 'Calculation', title='Test-Total-Pest')
>>> pest1 = {'keyword': 'pest1', 'title': 'Pesticide 1', 'value': 0, 'type': 'int', 'hidden': False, 'unit': ''}
>>> pest2 = {'keyword': 'pest2', 'title': 'Pesticide 2', 'value': 0, 'type': 'int', 'hidden': False, 'unit': ''}
>>> pest3 = {'keyword': 'pest3', 'title': 'Pesticide 3', 'value': 0, 'type': 'int', 'hidden': False, 'unit': ''}
>>> interims = [pest1, pest2, pest3]
>>> interim_calc.setInterimFields(interims)
>>> self.assertEqual(interim_calc.getInterimFields(), interims)
>>> interim_calc.setFormula('((([pest1] > 0.0) or ([pest2] > .05) or ([pest3] > 10.0) ) and "PASS" or "FAIL" )')
>>> analysisservice5 = api.create(bika_analysisservices, 'AnalysisService', title='Total Terpenes', Keyword="TotalTerpenes")
>>> analysisservice5.setUseDefaultCalculation(False)
>>> analysisservice5.setCalculation(interim_calc)
>>> analysisservice5.setInterimFields(interims)
>>> analysisservice5
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-5>
Create an AnalysisRequest with this AnalysisService and receive it:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()
... }
>>> service_uids = [analysisservice1.UID(),
... analysisservice2.UID(),
... analysisservice3.UID(),
... analysisservice4.UID(),
... analysisservice5.UID()
... ]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/H2O-0001>
>>> ar.getReceivedBy()
''
>>> wf = getToolByName(ar, 'portal_workflow')
>>> wf.doActionFor(ar, 'receive')
>>> ar.getReceivedBy()
'test_user_1_'
Instruments files path
Where testing files live:
>>> files_path = os.path.abspath(os.path.join(os.path.dirname( __file__ ), '..', 'files/instruments'))
>>> instruments_path = os.path.abspath(os.path.join(os.path.dirname( __file__ ), '../..', 'exportimport/instruments'))
>>> files = os.listdir(files_path)
>>> interfaces = []
>>> importer_filename = [] #List of tuples [(importer,filename),(importer, filename)]
>>> for fl in files:
... inst_interface = os.path.splitext(fl)[0]
... inst_path = '.'.join([inst_interface.replace('.', '/'), 'py'])
... if os.path.isfile(os.path.join(instruments_path, inst_path)):
... interfaces.append(inst_interface)
... importer_filename.append((inst_interface, fl))
... else:
... inst_path = '.'.join([fl.replace('.', '/'), 'py'])
... if os.path.isfile(os.path.join(instruments_path, inst_path)):
... interfaces.append(fl)
... importer_filename.append((fl, fl))
... else:
... self.fail('File {} found does not match any import interface'.format(fl))
Availability of instrument interface
Check that the instrument interface is available:
>>> exims = []
>>> for exim_id in instruments.__all__:
... exims.append(exim_id)
>>> [f for f in interfaces if f not in exims]
[]
Assigning the Import Interface to an Instrument
Create an Instrument and assign to it the tested Import Interface:
>>> for inter in interfaces:
... title = inter.split('.')[0].title()
... instrument = api.create(bika_instruments, "Instrument", title=title)
... instrument.setImportDataInterface([inter])
... if instrument.getImportDataInterface() != [inter]:
... self.fail('Instrument Import Data Interface did not get set')
>>> for inter in importer_filename:
... exec('from senaite.core.exportimport.instruments.{} import Import'.format(inter[0]))
... filename = os.path.join(files_path, inter[1])
... data = open(filename, 'r').read()
... import_file = FileUpload(TestFile(StringIO(data), inter[1]))
... request = TestRequest(form=dict(
... submitted=True,
... artoapply='received_tobeverified',
... results_override='override',
... instrument_results_file=import_file,
... sample='requestid',
... instrument=''))
... context = self.portal
... results = Import(context, request)
... test_results = eval(results)
... #TODO: Test for interim fields on other files as well
... analyses = ar.getAnalyses(full_objects=True)
... if 'Parsing file generic.two_dimension.csv' in test_results['log']:
... # Testing also for interim fields, only for `generic.two_dimension` interface
... # TODO: Test for - H2O-0001: calculated result for 'THCaCO3': '2.0'
... if 'Import finished successfully: 1 Samples and 3 results updated' not in test_results['log']:
... self.fail("Results Update failed")
... if "H2O-0001 result for 'TotalTerpenes:pest1': '1'" not in test_results['log']:
... self.fail("pest1 did not get updated")
... if "H2O-0001 result for 'TotalTerpenes:pest2': '1'" not in test_results['log']:
... self.fail("pest2 did not get updated")
... if "H2O-0001 result for 'TotalTerpenes:pest3': '1'" not in test_results['log']:
... self.fail("pest3 did not get updated")
... for an in analyses:
... if an.getKeyword() == 'TotalTerpenes':
... if an.getResult() != 'PASS':
... msg = "{}:Result did not get updated".format(an.getKeyword())
... self.fail(msg)
...
... elif 'Import finished successfully: 1 Samples and 2 results updated' not in test_results['log']:
... self.fail("Results Update failed")
...
... for an in analyses:
... if an.getKeyword() == 'Ca':
... if an.getResult() != '0.0':
... msg = "{}:Result did not get updated".format(an.getKeyword())
... self.fail(msg)
... if an.getKeyword() == 'Mg':
... if an.getResult() != '2.0':
... msg = "{}:Result did not get updated".format(an.getKeyword())
... self.fail(msg)
... if an.getKeyword() == 'THCaCO3':
... if an.getResult() != '2.0':
... msg = "{}:Result did not get updated".format(an.getKeyword())
... self.fail(msg)
...
... if 'Import' in globals():
... del Import
Internal Use of Samples and Analyses
Running this test from the buildout directory:
bin/test test_textual_doctests -t InternalUse
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IInternalUse
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils.analysisrequest import create_partition
>>> from bika.lims.subscribers.analysisrequest import gather_roles_for_permission
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from Products.CMFCore import permissions
>>> from zope.lifecycleevent import modified
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_sample(services, internal_use=False):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID(),
... 'InternalUse': internal_use,}
... service_uids = map(api.get_uid, services)
... return create_analysisrequest(client, request, values, service_uids)
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Set a Sample for internal use
Create a Sample for non internal use:
>>> sample = new_sample([Cu, Fe, Au])
>>> transitioned = do_action_for(sample, "receive")
>>> sample.getInternalUse()
False
>>> IInternalUse.providedBy(sample)
False
>>> internals = map(IInternalUse.providedBy, sample.getAnalyses(full_objects=True))
>>> any(internals)
False
Client contact does have access to this Sample:
>>> "Owner" in gather_roles_for_permission(permissions.View, sample)
True
>>> "Owner" in gather_roles_for_permission(permissions.ListFolderContents, sample)
True
>>> "Owner" in gather_roles_for_permission(permissions.AccessContentsInformation, sample)
True
Set the sample for internal use:
>>> sample.setInternalUse(True)
>>> modified(sample)
>>> sample.getInternalUse()
True
>>> IInternalUse.providedBy(sample)
True
>>> internals = map(IInternalUse.providedBy, sample.getAnalyses(full_objects=True))
>>> all(internals)
True
Client contact does not have access to this Sample anymore:
>>> "Owner" in gather_roles_for_permission(permissions.View, sample)
False
>>> "Owner" in gather_roles_for_permission(permissions.ListFolderContents, sample)
False
>>> "Owner" in gather_roles_for_permission(permissions.AccessContentsInformation, sample)
False
Even if we submit results and sample is transitioned thereafter:
>>> for analysis in sample.getAnalyses(full_objects=True):
... analysis.setResult(12)
... success = do_action_for(analysis, "submit")
>>> api.get_workflow_status_of(sample)
'to_be_verified'
>>> sample.getInternalUse()
True
>>> IInternalUse.providedBy(sample)
True
>>> internals = map(IInternalUse.providedBy, sample.getAnalyses(full_objects=True))
>>> all(internals)
True
>>> "Owner" in gather_roles_for_permission(permissions.View, sample)
False
>>> "Owner" in gather_roles_for_permission(permissions.ListFolderContents, sample)
False
>>> "Owner" in gather_roles_for_permission(permissions.AccessContentsInformation, sample)
False
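The permission changes shown above can be modelled as dropping the client contact's `Owner` role from the view-related permissions once a sample is flagged for internal use. This is a simplified sketch; the baseline role set is an assumption, and in practice SENAITE manages the security per permission and reindexes object security:

```python
def roles_for_view(internal_use):
    # Hypothetical baseline role set; SENAITE grants more roles
    # in practice. The point: "Owner" (the client contact) is
    # removed as soon as the sample is for internal use only.
    roles = {"LabManager", "Manager", "Owner"}
    if internal_use:
        roles.discard("Owner")
    return sorted(roles)

print("Owner" in roles_for_view(False))  # True
print("Owner" in roles_for_view(True))   # False
```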
Creation of a Sample for internal use
Create a Sample for internal use:
>>> sample = new_sample([Cu, Fe, Au], internal_use=True)
>>> transitioned = do_action_for(sample, "receive")
>>> modified(sample)
>>> sample.getInternalUse()
True
>>> IInternalUse.providedBy(sample)
True
>>> internals = map(IInternalUse.providedBy, sample.getAnalyses(full_objects=True))
>>> all(internals)
True
Client contact does not have access to this Sample:
>>> "Owner" in gather_roles_for_permission(permissions.View, sample)
False
>>> "Owner" in gather_roles_for_permission(permissions.ListFolderContents, sample)
False
>>> "Owner" in gather_roles_for_permission(permissions.AccessContentsInformation, sample)
False
Creation of a Partition for internal use
Create a Sample for non internal use:
>>> sample = new_sample([Cu, Fe, Au])
>>> transitioned = do_action_for(sample, "receive")
Create two partitions, the first for internal use:
>>> analyses = sample.getAnalyses(full_objects=True)
>>> part1 = create_partition(sample, request, analyses[2:], internal_use=True)
>>> part2 = create_partition(sample, request, analyses[:2], internal_use=False)
>>> IInternalUse.providedBy(part1)
True
>>> IInternalUse.providedBy(part2)
False
>>> IInternalUse.providedBy(sample)
False
Submit results for partition 2 (non-internal-use):
>>> for analysis in part2.getAnalyses(full_objects=True):
... analysis.setResult(12)
... success = do_action_for(analysis, "submit")
>>> api.get_workflow_status_of(part2)
'to_be_verified'
Since partition 1 is labelled for internal use, the primary sample has been
automatically transitioned too:
>>> api.get_workflow_status_of(sample)
'to_be_verified'
While partition 1 remains in “received” status:
>>> api.get_workflow_status_of(part1)
'sample_received'
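The cascade above can be sketched as: internal-use partitions do not block the primary sample, so once every remaining (non-internal) partition has been submitted, the primary follows automatically. A simplified model of that rule, with dictionaries standing in for partition objects:

```python
def primary_status(partitions):
    # Internal-use partitions are ignored; the primary sample is
    # promoted to "to_be_verified" once all remaining partitions
    # have reached that state.
    considered = [p for p in partitions if not p["internal_use"]]
    if considered and all(p["state"] == "to_be_verified" for p in considered):
        return "to_be_verified"
    return "sample_received"

parts = [
    {"internal_use": True, "state": "sample_received"},   # part1
    {"internal_use": False, "state": "to_be_verified"},   # part2
]
print(primary_status(parts))  # 'to_be_verified'
```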
Listings
Running this test from the buildout directory:
bin/test test_textual_doctests -t Listings
Test Setup
Imports:
>>> from operator import methodcaller
>>> from DateTime import DateTime
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from plone import api as ploneapi
Functional Helpers:
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def create_ar(client, **kw):
... values = {}
... services = []
... for k, v in kw.items():
... if k == "Services":
... services = map(api.get_uid, v)
... elif api.is_object(v):
... values[k] = api.get_uid(v)
... else:
... values[k] = v
... return create_analysisrequest(client, self.request, values, services)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
Test User:
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager', 'Sampler'])
Prepare Test Environment
Setup items:
>>> clients = portal.clients
>>> sampletypes = setup.bika_sampletypes
>>> samplepoints = setup.bika_samplepoints
>>> analysiscategories = setup.bika_analysiscategories
>>> analysisservices = setup.bika_analysisservices
>>> setup.setSamplingWorkflowEnabled(True)
Create Clients:
>>> cl1 = api.create(clients, "Client", Name="Client1", ClientID="C1")
>>> cl2 = api.create(clients, "Client", Name="Client2", ClientID="C2")
>>> cl3 = api.create(clients, "Client", Name="Client3", ClientID="C3")
Create some Contact(s):
>>> c1 = api.create(cl1, "Contact", Firstname="Client", Surname="1")
>>> c2 = api.create(cl2, "Contact", Firstname="Client", Surname="2")
>>> c3 = api.create(cl3, "Contact", Firstname="Client", Surname="3")
Create some Sample Types:
>>> st1 = api.create(sampletypes, "SampleType", Prefix="s1", MinimumVolume="100 ml")
>>> st2 = api.create(sampletypes, "SampleType", Prefix="s2", MinimumVolume="200 ml")
>>> st3 = api.create(sampletypes, "SampleType", Prefix="s3", MinimumVolume="300 ml")
Create some Sample Points:
>>> sp1 = api.create(samplepoints, "SamplePoint", title="Sample Point 1")
>>> sp2 = api.create(samplepoints, "SamplePoint", title="Sample Point 2")
>>> sp3 = api.create(samplepoints, "SamplePoint", title="Sample Point 3")
Create some Analysis Categories:
>>> ac1 = api.create(analysiscategories, "AnalysisCategory", title="Analysis Category 1")
>>> ac2 = api.create(analysiscategories, "AnalysisCategory", title="Analysis Category 2")
>>> ac3 = api.create(analysiscategories, "AnalysisCategory", title="Analysis Category 3")
Create some Analysis Services:
>>> as1 = api.create(analysisservices, "AnalysisService", title="Analysis Service 1", ShortTitle="AS1", Category=ac1, Keyword="AS1", Price="10")
>>> as2 = api.create(analysisservices, "AnalysisService", title="Analysis Service 2", ShortTitle="AS2", Category=ac2, Keyword="AS2", Price="20")
>>> as3 = api.create(analysisservices, "AnalysisService", title="Analysis Service 3", ShortTitle="AS3", Category=ac3, Keyword="AS3", Price="30")
Create some Analysis Requests:
>>> ar11 = create_ar(cl1, Contact=c1, SamplingDate=date_now, DateSampled=date_now, SampleType=st1, Priority='1', Services=[as1])
>>> ar12 = create_ar(cl1, Contact=c1, SamplingDate=date_now, DateSampled=date_now, SampleType=st1, Priority='2', Services=[as1])
>>> ar13 = create_ar(cl1, Contact=c1, SamplingDate=date_now, DateSampled=date_now, SampleType=st1, Priority='3', Services=[as1])
>>> ar21 = create_ar(cl2, Contact=c2, SamplingDate=date_now, DateSampled=date_now, SampleType=st2, Priority='1', Services=[as2])
>>> ar22 = create_ar(cl2, Contact=c2, SamplingDate=date_now, DateSampled=date_now, SampleType=st2, Priority='2', Services=[as2])
>>> ar23 = create_ar(cl2, Contact=c2, SamplingDate=date_now, DateSampled=date_now, SampleType=st2, Priority='3', Services=[as2])
>>> ar31 = create_ar(cl3, Contact=c3, SamplingDate=date_now, DateSampled=date_now, SampleType=st3, Priority='1', Services=[as3])
>>> ar32 = create_ar(cl3, Contact=c3, SamplingDate=date_now, DateSampled=date_now, SampleType=st3, Priority='2', Services=[as3])
>>> ar33 = create_ar(cl3, Contact=c3, SamplingDate=date_now, DateSampled=date_now, SampleType=st3, Priority='3', Services=[as3])
Listing View
>>> from senaite.app.listing.view import ListingView
>>> context = portal.samples
>>> request = self.request
>>> listing = ListingView(context, request)
>>> listing
<senaite.app.listing.view.ListingView object at 0x...>
Setup the view to behave like the SamplesView:
>>> from senaite.core.catalog import SAMPLE_CATALOG
>>> listing.catalog = SAMPLE_CATALOG
>>> listing.contentFilter = {
... 'sort_on': 'created',
... 'sort_order': 'reverse',
... 'path': {"query": "/", "level": 0},
... 'is_active': True,}
The listing view should now return all created ARs:
>>> results = listing.search()
>>> len(results)
9
Searching for a value should work:
>>> results = listing.search(searchterm="s1")
>>> len(results)
3
>>> map(lambda x: x.getObject().getSampleType().getPrefix(), results)
['s1', 's1', 's1']
>>> results = listing.search(searchterm="C3")
>>> map(lambda x: x.getObject().getClient(), results)
[<Client at /plone/clients/client-3>, <Client at /plone/clients/client-3>, <Client at /plone/clients/client-3>]
Create SampleView:
>>> from senaite.core.browser.samples.view import SamplesView
>>> samples_view = SamplesView(context, request)
>>> samples_view
<senaite.core.browser.samples.view.SamplesView object at 0x...>
>>> samples_view.roles = ['Manager',]
>>> samples_view.member = ploneapi.user.get_current()
>>> items = samples_view.folderitems()
>>> len(items)
9
>>> 'getDateSampled' in items[0]
True
>>> 'getDateSampled' in items[0]['allow_edit']
True
>>> samples_view.columns['getDateSampled']['type']
'datetime'
Permissions
All objects in Bika LIMS are permission aware.
Therefore, only users with the right roles can view or edit contents.
Each role may contain one or more permissions.
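The role/permission relationship can be pictured with a minimal plain-Python sketch (the role-to-permission mapping below is illustrative, not the actual SENAITE rolemap):

```python
# Minimal model of the role/permission relationship: each role maps to a
# set of permissions, and an action is allowed if any of the user's roles
# grants the required permission. Mapping contents are illustrative.
ROLEMAP = {
    "LabManager": {"View", "Modify portal content", "Delete objects"},
    "LabClerk": {"View", "Modify portal content"},
    "Authenticated": {"View"},
}

def is_allowed(user_roles, permission):
    """Return True if any of the user's roles grants the permission."""
    return any(permission in ROLEMAP.get(role, set()) for role in user_roles)

print(is_allowed(["LabClerk"], "Modify portal content"))  # True
print(is_allowed(["Authenticated"], "Delete objects"))    # False
```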
Test Setup
>>> import os
>>> import transaction
>>> from plone import api as ploneapi
>>> from zope.lifecycleevent import modified
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from plone.app.testing import setRoles
>>> portal = self.portal
>>> portal_url = portal.absolute_url()
>>> bika_setup = portal.bika_setup
>>> bika_setup_url = portal_url + "/bika_setup"
>>> browser = self.getBrowser()
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def login(user=TEST_USER_ID, password=TEST_USER_PASSWORD):
... browser.open(portal_url + "/login_form")
... browser.getControl(name='__ac_name').value = user
... browser.getControl(name='__ac_password').value = password
... browser.getControl(name='buttons.login').click()
... assert("__ac_password" not in browser.contents)
>>> def logout():
... browser.open(portal_url + "/logout")
... assert("You are now logged out" in browser.contents)
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
>>> def create(container, portal_type, title=None):
... # Creates a content in a container and manually calls processForm
... title = title is None and "Test {}".format(portal_type) or title
... _ = container.invokeFactory(portal_type, id="tmpID", title=title)
... obj = container.get(_)
... obj.processForm()
... modified(obj) # notify explicitly for the test
...     transaction.commit() # commit so the created object becomes visible to the test
... return obj
>>> def get_workflows_for(context):
... # Returns a tuple of assigned workflows for the given context
... workflow = ploneapi.portal.get_tool("portal_workflow")
... return workflow.getChainFor(context)
>>> def get_workflow_status_of(context):
... # Returns the workflow status of the given context
... return ploneapi.content.get_state(context)
Test Workflows and Permissions
Workflows control the allowed roles for specific permissions.
A role is a container for several permissions.
Bika Setup
Bika Setup is a folderish object which handles the lab’s configuration items, like
Laboratory information, Instruments, Analysis Services etc.
Test Workflow
A bika_setup lives in the root of a bika installation, or more precisely, the
portal object:
>>> bika_setup = portal.bika_setup
The setup folder follows the senaite_setup_workflow and is initially in the
active state:
>>> get_workflows_for(bika_setup)
('senaite_setup_workflow',)
>>> get_workflow_status_of(bika_setup)
'active'
Test Permissions
Exactly these roles should have the View permission:
>>> get_roles_for_permission("View", bika_setup)
['Authenticated']
Exactly these roles should have the Access contents information permission:
>>> get_roles_for_permission("Access contents information", bika_setup)
['Authenticated']
Exactly these roles should have the List folder contents permission:
>>> get_roles_for_permission("List folder contents", bika_setup)
['Authenticated']
Exactly these roles should have the Modify portal content permission:
>>> get_roles_for_permission("Modify portal content", bika_setup)
['LabClerk', 'LabManager', 'Manager']
Exactly these roles (nobody) should have the Delete objects permission:
>>> get_roles_for_permission("Delete objects", bika_setup)
[]
Anonymous Browser Test
Ensure we are logged out:
>>> logout()
Anonymous should not be able to view the bika_setup folder:
>>> browser.open(bika_setup.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit the bika_setup folder:
>>> browser.open(bika_setup.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Laboratory
The Laboratory object holds all needed information about the lab itself.
It lives inside the bika_setup folder.
Test Workflow
The laboratory object is accessible via the bika_setup folder:
>>> laboratory = portal.bika_setup.laboratory
The laboratory folder follows the senaite_laboratory_workflow and is
initially in the active state:
>>> get_workflows_for(laboratory)
('senaite_laboratory_workflow',)
>>> get_workflow_status_of(laboratory)
'active'
Test Permissions
Exactly these roles should have the View permission:
>>> get_roles_for_permission("View", laboratory)
['Authenticated']
Exactly these roles should have the Access contents information permission:
>>> get_roles_for_permission("Access contents information", laboratory)
['Authenticated']
Exactly these roles should have the List folder contents permission:
>>> get_roles_for_permission("List folder contents", laboratory)
['Authenticated']
Exactly these roles should have the Modify portal content permission:
>>> get_roles_for_permission("Modify portal content", laboratory)
['LabClerk', 'LabManager', 'Manager']
Exactly these roles (nobody) should have the Delete objects permission:
>>> get_roles_for_permission("Delete objects", laboratory)
[]
Anonymous Browser Test
Ensure we are logged out:
>>> logout()
TODO: Fails with LocationError: (<UnauthorizedBinding: context>, ‘main_template’)
Anonymous should not be able to view the laboratory folder:
browser.open(laboratory.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit the laboratory folder:
>>> browser.open(laboratory.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Instrument(s)
Instruments represent the measuring hardware of the lab.
Test Workflow
An instrument lives in the bika_setup/bika_instruments folder:
>>> instruments = bika_setup.bika_instruments
>>> instrument = create(instruments, "Instrument")
The bika_instruments folder follows the senaite_instruments_workflow and is
initially in the active state:
>>> get_workflows_for(instruments)
('senaite_instruments_workflow',)
>>> get_workflow_status_of(instruments)
'active'
An instrument follows the senaite_deactivable_type_workflow and has an
initial state of active:
>>> get_workflows_for(instrument)
('senaite_deactivable_type_workflow',)
>>> get_workflow_status_of(instrument)
'active'
Test Permissions
Exactly these roles should have the View permission:
>>> get_roles_for_permission("View", instruments)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Preserver', 'Publisher', 'RegulatoryInspector', 'Sampler', 'SamplingCoordinator', 'Verifier']
>>> get_roles_for_permission("View", instrument)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Preserver', 'Publisher', 'RegulatoryInspector', 'Sampler', 'SamplingCoordinator', 'Verifier']
Exactly these roles should have the Access contents information permission:
>>> get_roles_for_permission("Access contents information", instruments)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Preserver', 'Publisher', 'RegulatoryInspector', 'Sampler', 'SamplingCoordinator', 'Verifier']
>>> get_roles_for_permission("Access contents information", instrument)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Preserver', 'Publisher', 'RegulatoryInspector', 'Sampler', 'SamplingCoordinator', 'Verifier']
Exactly these roles should have the List folder contents permission:
>>> get_roles_for_permission("List folder contents", instruments)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Preserver', 'Publisher', 'RegulatoryInspector', 'Sampler', 'SamplingCoordinator', 'Verifier']
>>> get_roles_for_permission("List folder contents", instrument)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Preserver', 'Publisher', 'RegulatoryInspector', 'Sampler', 'SamplingCoordinator', 'Verifier']
Exactly these roles should have the Modify portal content permission:
>>> get_roles_for_permission("Modify portal content", instruments)
['LabClerk', 'LabManager', 'Manager']
>>> get_roles_for_permission("Modify portal content", instrument)
['LabClerk', 'LabManager', 'Manager']
Exactly these roles should have the Delete objects permission:
>>> get_roles_for_permission("Delete objects", instruments)
[]
>>> get_roles_for_permission("Delete objects", instrument)
[]
Anonymous Browser Test
Ensure we are logged out:
>>> logout()
Anonymous should not be able to view the bika_instruments folder:
>>> browser.open(instruments.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
TODO: Fails with LocationError: (<UnauthorizedBinding: context>, ‘main_template’)
Anonymous should not be able to view an instrument:
browser.open(instrument.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit the bika_instruments folder:
>>> browser.open(instruments.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit an instrument:
>>> browser.open(instrument.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Method(s)
Methods describe the sampling methods of the lab.
Methods should be viewable by unauthenticated users for information purpose.
Test Workflow
A method lives in the methods folder:
>>> methods = portal.methods
>>> method = create(methods, "Method")
The methods folder follows the senaite_setup_workflow and is initially in
the active state:
>>> get_workflows_for(methods)
('senaite_setup_workflow',)
>>> get_workflow_status_of(methods)
'active'
A method follows the senaite_deactivable_type_workflow and has an initial
state of active:
>>> get_workflows_for(method)
('senaite_deactivable_type_workflow',)
>>> get_workflow_status_of(method)
'active'
Test Permissions
Exactly these roles should have the View permission:
>>> get_roles_for_permission("View", methods)
['Authenticated']
>>> get_roles_for_permission("View", method)
['Authenticated']
Exactly these roles should have the Access contents information permission:
>>> get_roles_for_permission("Access contents information", methods)
['Authenticated']
>>> get_roles_for_permission("Access contents information", method)
['Authenticated']
Exactly these roles should have the List folder contents permission:
>>> get_roles_for_permission("List folder contents", methods)
['Authenticated']
>>> get_roles_for_permission("List folder contents", method)
['Authenticated']
Exactly these roles should have the Modify portal content permission:
>>> get_roles_for_permission("Modify portal content", methods)
['LabClerk', 'LabManager', 'Manager']
>>> get_roles_for_permission("Modify portal content", method)
['LabClerk', 'LabManager', 'Manager']
Exactly these roles should have the Delete objects permission:
>>> get_roles_for_permission("Delete objects", methods)
[]
>>> get_roles_for_permission("Delete objects", method)
[]
Anonymous Browser Test
Ensure we are logged out:
>>> logout()
Anonymous should not be able to view the methods folder:
>>> browser.open(methods.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
TODO: Fails with LocationError: (<UnauthorizedBinding: context>, ‘main_template’)
Anonymous should not be able to view a method:
browser.open(method.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit the methods folder:
>>> browser.open(methods.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit a method:
>>> browser.open(method.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Analysis Service(s)
Analysis services describe which “products” the lab offers.
Test Workflow
An analysisservice lives in the bika_setup/bika_analysisservices folder:
>>> analysisservices = bika_setup.bika_analysisservices
>>> analysisservice = create(analysisservices, "AnalysisService")
The bika_analysisservices folder follows the senaite_one_state_workflow
and is initially in the active state:
>>> get_workflows_for(analysisservices)
('senaite_one_state_workflow',)
>>> get_workflow_status_of(analysisservices)
'active'
An analysisservice follows the senaite_deactivable_type_workflow and has an
initial state of active:
>>> get_workflows_for(analysisservice)
('senaite_deactivable_type_workflow',)
>>> get_workflow_status_of(analysisservice)
'active'
Test Permissions
Exactly these roles should have the View permission:
>>> get_roles_for_permission("View", analysisservices)
['Authenticated']
>>> get_roles_for_permission("View", analysisservice)
['Authenticated']
Exactly these roles should have the Access contents information permission:
>>> get_roles_for_permission("Access contents information", analysisservices)
['Authenticated']
>>> get_roles_for_permission("Access contents information", analysisservice)
['Authenticated']
Exactly these roles should have the List folder contents permission:
>>> get_roles_for_permission("List folder contents", analysisservices)
['Authenticated']
>>> get_roles_for_permission("List folder contents", analysisservice)
['Authenticated']
Exactly these roles should have the Modify portal content permission:
>>> get_roles_for_permission("Modify portal content", analysisservices)
['LabClerk', 'LabManager', 'Manager']
>>> get_roles_for_permission("Modify portal content", analysisservice)
['LabClerk', 'LabManager', 'Manager']
Exactly these roles should have the Delete objects permission:
>>> get_roles_for_permission("Delete objects", analysisservices)
[]
>>> get_roles_for_permission("Delete objects", analysisservice)
[]
Anonymous Browser Test
Ensure we are logged out:
>>> logout()
Anonymous should not be able to view the bika_analysisservices folder:
>>> browser.open(analysisservices.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
TODO: Fails with LocationError: (<UnauthorizedBinding: context>, ‘main_template’)
Anonymous should not be able to view an analysisservice:
browser.open(analysisservice.absolute_url() + "/base_view")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit the bika_analysisservices folder:
>>> browser.open(analysisservices.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
Anonymous should not be able to edit an analysisservice:
>>> browser.open(analysisservice.absolute_url() + "/base_edit")
Traceback (most recent call last):
...
Unauthorized: ...
QC Analyses With Interim Fields On A Worksheet
We create an analysis that has interim fields so that we can test
Reference Analyses (Blank and Control) that carry interim fields.
Running this test from the buildout directory:
bin/test test_textual_doctests -t QCAnalysesWithInterimFieldsOnAWorksheet
Test Setup
Needed Imports:
>>> import re
>>> import transaction
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor
>>> from DateTime import DateTime
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from plone.app.testing import setRoles
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bikasetup = portal.bika_setup
>>> bika_analysisservices = bika_setup.bika_analysisservices
>>> bika_calculations = bika_setup.bika_calculations
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager', 'Analyst'])
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> interim_calc = api.create(bika_calculations, 'Calculation', title='Test-Total-Pest')
>>> pest1 = {'keyword': 'pest1', 'title': 'Pesticide 1', 'value': 12.3, 'type': 'int', 'hidden': False, 'unit': ''}
>>> pest2 = {'keyword': 'pest2', 'title': 'Pesticide 2', 'value': 14.89, 'type': 'int', 'hidden': False, 'unit': ''}
>>> pest3 = {'keyword': 'pest3', 'title': 'Pesticide 3', 'value': 16.82, 'type': 'int', 'hidden': False, 'unit': ''}
>>> interims = [pest1, pest2, pest3]
>>> interim_calc.setInterimFields(interims)
>>> self.assertEqual(interim_calc.getInterimFields(), interims)
>>> interim_calc.setFormula('((([pest1] > 0.0) or ([pest2] > .05) or ([pest3] > 10.0) ) and "FAIL" or "PASS" )')
>>> total_terpenes = api.create(bika_analysisservices, 'AnalysisService', title='Total Terpenes', Keyword="TotalTerpenes")
>>> total_terpenes.setUseDefaultCalculation(False)
>>> total_terpenes.setCalculation(interim_calc)
>>> total_terpenes.setInterimFields(interims)
>>> total_terpenes
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
>>> service_uids = [total_terpenes.UID()]
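The formula set above uses the classic "cond and A or B" idiom. A plain-Python sketch (outside the calculation machinery) shows how it evaluates for the default interim values defined earlier:

```python
# The "cond and A or B" idiom yields A when cond is truthy, B otherwise.
# Substituting the default interim values from the setup above:
pest1, pest2, pest3 = 12.3, 14.89, 16.82
flagged = ((pest1 > 0.0) or (pest2 > .05) or (pest3 > 10.0)) and "FAIL" or "PASS"
print(flagged)  # FAIL

# With all interims at zero, the condition is False and the sample passes:
pest1 = pest2 = pest3 = 0.0
passed = ((pest1 > 0.0) or (pest2 > .05) or (pest3 > 10.0)) and "FAIL" or "PASS"
print(passed)  # PASS
```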
Create a Reference Definition for blank:
>>> blankdef = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': total_terpenes.UID(), 'result': '0', 'min': '0', 'max': '0'},]
>>> blankdef.setReferenceResults(blank_refs)
And for control:
>>> controldef = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': total_terpenes.UID(), 'result': '10', 'min': '9.99', 'max': '10.01'},]
>>> controldef.setReferenceResults(control_refs)
>>> blank = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blankdef,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
>>> control = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=controldef,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Create an Analysis Request:
>>> sampletype_uid = api.get_uid(sampletype)
>>> values = {
... 'Client': api.get_uid(client),
... 'Contact': api.get_uid(contact),
... 'DateSampled': date_now,
... 'SampleType': sampletype_uid,
... 'Priority': '1',
... }
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/W-0001>
>>> success = doActionFor(ar, 'receive')
Create a new Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet", Analyst='test_user_1_')
>>> worksheet
<Worksheet at /plone/worksheets/WS-001>
>>> analyses = map(api.get_object, ar.getAnalyses())
>>> analysis = analyses[0]
>>> analysis
<Analysis at /plone/clients/client-1/W-0001/TotalTerpenes>
>>> worksheet.addAnalysis(analysis)
>>> analysis.getWorksheet().UID() == worksheet.UID()
True
Add a blank and a control:
>>> blanks = worksheet.addReferenceAnalyses(blank, service_uids)
>>> transaction.commit()
>>> blanks.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
>>> controls = worksheet.addReferenceAnalyses(control, service_uids)
>>> transaction.commit()
>>> controls.sort(key=lambda analysis: analysis.getKeyword(), reverse=False)
>>> transaction.commit()
>>> for analysis in worksheet.getAnalyses():
... if analysis.portal_type == 'ReferenceAnalysis':
... if analysis.getReferenceType() == 'b' or analysis.getReferenceType() == 'c':
... # 3 is the number of interim fields on the analysis/calculation
... if len(analysis.getInterimFields()) != 3:
... self.fail("Blank or Control Analyses interim field are not correct")
Removal of Analyses from an Analysis Request
Running this test from the buildout directory:
bin/test test_textual_doctests -t RemoveAnalysesFromAnalysisRequest
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
And set some settings:
>>> bikasetup.setSelfVerificationEnabled(True)
Remove Analyses from an Analysis Request not yet received
Create a new Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
And remove two analyses (Cu and Fe):
>>> ar.setAnalyses([Au])
>>> map(lambda an: an.getKeyword(), ar.getAnalyses(full_objects=True))
['Au']
And the Analysis Request remains in the same state
>>> api.get_workflow_status_of(ar)
'sample_due'
Remove Analyses from an Analysis Request with submitted and verified results
Create a new Analysis Request and receive:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(ar)
'sample_received'
Submit results for Fe:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> analysis_fe = filter(lambda an: an.getKeyword() == "Fe", analyses)[0]
>>> analysis_fe.setResult(12)
>>> transitioned = do_action_for(analysis_fe, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_fe)
'to_be_verified'
The Analysis Request status is still sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
Submit results for Au:
>>> analysis_au = filter(lambda an: an.getKeyword() == "Au", analyses)[0]
>>> analysis_au.setResult(14)
>>> transitioned = do_action_for(analysis_au, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_au)
'to_be_verified'
And verify Au:
>>> transitioned = do_action_for(analysis_au, "verify")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_au)
'verified'
Again, the Analysis Request status is still sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
But if we remove the analysis without a result (Cu), the Analysis Request
transitions to “to_be_verified”, because it follows the state of Fe:
>>> ar.setAnalyses([Fe, Au])
>>> api.get_workflow_status_of(ar)
'to_be_verified'
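The cascading behaviour can be pictured with a simplified model: the sample assumes the least-advanced state among its remaining analyses. The state ordering below is an assumption for illustration, not the actual SENAITE workflow definition:

```python
# Simplified model of the cascade: the sample follows the least-advanced
# state of its remaining analyses. ORDER is an assumed ranking.
ORDER = ["unassigned", "to_be_verified", "verified"]

def sample_state(analysis_states):
    """Return the least-advanced state according to ORDER."""
    return min(analysis_states, key=ORDER.index)

# Once only Fe (to_be_verified) and Au (verified) remain, the sample
# follows Fe; if everything is verified, the sample is verified too:
print(sample_state(["to_be_verified", "verified"]))  # to_be_verified
print(sample_state(["verified", "verified"]))        # verified
```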
Analyses which are in the state to_be_verified cannot be removed.
Therefore, if we try to remove the analysis Fe (in to_be_verified state),
the Analysis Request will stay in to_be_verified and the analysis will still
be assigned:
>>> ar.setAnalyses([Au])
>>> analysis_fe in ar.objectValues()
True
>>> analysis_au in ar.objectValues()
True
>>> api.get_workflow_status_of(ar)
'to_be_verified'
The only way to remove the Fe analysis is to retract it first:
>>> transitioned = do_action_for(analysis_fe, "retract")
>>> api.get_workflow_status_of(analysis_fe)
'retracted'
And if we remove analysis Fe, the Analysis Request will follow Au analysis
(that is verified):
>>> ar.setAnalyses([Au])
>>> api.get_workflow_status_of(ar)
'verified'
Remove Analyses from an Analysis Request with all remaining tests verified
Create a new Analysis Request and receive:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(ar)
'sample_received'
Submit and verify results for Fe:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> analysis_fe = filter(lambda an: an.getKeyword() == "Fe", analyses)[0]
>>> analysis_fe.setResult(12)
>>> transitioned = do_action_for(analysis_fe, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_fe)
'to_be_verified'
>>> transitioned = do_action_for(analysis_fe, "verify")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_fe)
'verified'
Submit and verify results for Au:
>>> analysis_au = filter(lambda an: an.getKeyword() == "Au", analyses)[0]
>>> analysis_au.setResult(14)
>>> transitioned = do_action_for(analysis_au, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_au)
'to_be_verified'
>>> transitioned = do_action_for(analysis_au, "verify")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_au)
'verified'
The Analysis Request status is still sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
But if we remove the analysis without a result (Cu), the Analysis Request
transitions to “verified”, because it follows Fe and Au:
>>> ar.setAnalyses([Fe, Au])
>>> api.get_workflow_status_of(ar)
'verified'
Rolemap
Bika LIMS defines several roles for the lab context.
How to run this test
Please execute the following command in the buildout directory:
./bin/test test_textual_doctests -t Rolemap
Test Setup
Needed Imports:
>>> from bika.lims import api
Test Variables:
>>> portal = api.get_portal()
>>> acl_users = api.get_tool("acl_users")
Check Bika LIMS Roles
Ensure the “Analyst” role exists:
>>> role = "Analyst"
>>> role in acl_users.validRoles()
True
Ensure the “Client” role exists:
>>> role = "Client"
>>> role in acl_users.validRoles()
True
Ensure the “LabClerk” role exists:
>>> role = "LabClerk"
>>> role in acl_users.validRoles()
True
Ensure the “LabManager” role exists:
>>> role = "LabManager"
>>> role in acl_users.validRoles()
True
Ensure the “Member” role exists:
>>> role = "Member"
>>> role in acl_users.validRoles()
True
Ensure the “Preserver” role exists:
>>> role = "Preserver"
>>> role in acl_users.validRoles()
True
Ensure the “Publisher” role exists:
>>> role = "Publisher"
>>> role in acl_users.validRoles()
True
Ensure the “RegulatoryInspector” role exists:
>>> role = "RegulatoryInspector"
>>> role in acl_users.validRoles()
True
Ensure the “Reviewer” role exists:
>>> role = "Reviewer"
>>> role in acl_users.validRoles()
True
Ensure the “Sampler” role exists:
>>> role = "Sampler"
>>> role in acl_users.validRoles()
True
Ensure the “SamplingCoordinator” role exists:
>>> role = "SamplingCoordinator"
>>> role in acl_users.validRoles()
True
Ensure the “Verifier” role exists:
>>> role = "Verifier"
>>> role in acl_users.validRoles()
True
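The individual checks above boil down to a subset test; they could equally be expressed as one loop. A plain-Python sketch (the valid_roles list below stands in for acl_users.validRoles()):

```python
# valid_roles stands in for acl_users.validRoles(); the expected set lists
# the Bika LIMS roles checked one by one above.
valid_roles = [
    "Analyst", "Client", "LabClerk", "LabManager", "Member", "Preserver",
    "Publisher", "RegulatoryInspector", "Reviewer", "Sampler",
    "SamplingCoordinator", "Verifier",
]

expected = {"Analyst", "Client", "LabClerk", "LabManager", "Member",
            "Preserver", "Publisher", "RegulatoryInspector", "Reviewer",
            "Sampler", "SamplingCoordinator", "Verifier"}

# Roles that are expected but not registered; empty means all are present.
missing = expected.difference(valid_roles)
print(sorted(missing))  # []
```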
Secondary Analysis Request
Running this test from the buildout directory:
bin/test test_textual_doctests -t SecondaryAnalysisRequest
Test Setup
Needed Imports:
>>> from DateTime import DateTime
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> from bika.lims import api
>>> from bika.lims.interfaces import IAnalysisRequestSecondary
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils.analysisrequest import create_partition
>>> from bika.lims.workflow import doActionFor as do_action_for
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
Some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ["LabManager",])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Create Secondary Analysis Request
To create a Secondary Analysis Request, we first need a primary (or source)
Analysis Request to which the secondary one will refer:
>>> values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "SamplingDate": DateTime(),
... "DateSampled": DateTime(),
... "SampleType": sampletype.UID() }
>>> service_uids = map(api.get_uid, [Cu, Fe, Au])
>>> primary = create_analysisrequest(client, request, values, service_uids)
>>> primary
<AnalysisRequest at /plone/clients/client-1/W-0001>
Receive the primary analysis request:
>>> transitioned = do_action_for(primary, "receive")
>>> api.get_workflow_status_of(primary)
'sample_received'
Create the Secondary Analysis Request:
>>> values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "SampleType": sampletype.UID(),
... "PrimaryAnalysisRequest": primary }
>>> service_uids = map(api.get_uid, [Cu, Fe, Au])
>>> secondary = create_analysisrequest(client, request, values, service_uids)
>>> secondary
<AnalysisRequest at /plone/clients/client-1/W-0001-S01>
>>> secondary.getPrimaryAnalysisRequest()
<AnalysisRequest at /plone/clients/client-1/W-0001>
The secondary AnalysisRequest also provides IAnalysisRequestSecondary:
>>> IAnalysisRequestSecondary.providedBy(secondary)
True
Dates match with those from the primary Analysis Request:
>>> secondary.getDateSampled() == primary.getDateSampled()
True
>>> secondary.getSamplingDate() == primary.getSamplingDate()
True
The secondary sample is automatically transitioned to sample_received:
>>> api.get_workflow_status_of(secondary)
'sample_received'
The SampleReceived date matches with the primary’s:
>>> secondary.getDateReceived() == primary.getDateReceived()
True
Analyses have also been initialized automatically:
>>> analyses = secondary.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['unassigned', 'unassigned', 'unassigned']
If I create another secondary sample using the same AR as the primary:
>>> values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "SampleType": sampletype.UID(),
... "PrimaryAnalysisRequest": primary }
>>> service_uids = map(api.get_uid, [Cu, Fe, Au])
>>> secondary = create_analysisrequest(client, request, values, service_uids)
The ID suffix of the new secondary sample increases by one:
>>> secondary.getId()
'W-0001-S02'
If I create a secondary sample using a secondary AR as the primary:
>>> values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "SampleType": sampletype.UID(),
... "PrimaryAnalysisRequest": secondary }
>>> service_uids = map(api.get_uid, [Cu, Fe, Au])
>>> third = create_analysisrequest(client, request, values, service_uids)
The ID suffix is extended accordingly:
>>> third.getId()
'W-0001-S02-S01'
And the associated primary AR is the secondary sample we created earlier:
>>> third.getPrimaryAnalysisRequest()
<AnalysisRequest at /plone/clients/client-1/W-0001-S02>
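The suffixing rule demonstrated above can be sketched with a small helper; `next_secondary_id` is a hypothetical illustration of the naming scheme, not SENAITE's ID server:

```python
def next_secondary_id(primary_id, existing_ids):
    """Return the next secondary sample ID for a given primary.

    Secondary IDs append an ``-SNN`` suffix to the primary's own ID,
    where NN is a zero-padded counter over the secondaries that already
    hang directly off that primary.
    """
    prefix = primary_id + "-S"
    # Count only direct secondaries of this primary (suffix is all digits).
    count = sum(
        1 for sid in existing_ids
        if sid.startswith(prefix) and sid[len(prefix):].isdigit()
    )
    return "{}{:02d}".format(prefix, count + 1)

print(next_secondary_id("W-0001", []))              # W-0001-S01
print(next_secondary_id("W-0001", ["W-0001-S01"]))  # W-0001-S02
# A secondary of a secondary extends the suffix chain:
print(next_secondary_id("W-0001-S02", ["W-0001-S01", "W-0001-S02"]))
```

Because nested secondaries get a fresh counter per primary, the chain `W-0001-S02-S01` follows naturally from the same rule.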
And of course, it keeps the same date values:
>>> third.getDateSampled() == secondary.getDateSampled()
True
>>> third.getSamplingDate() == secondary.getSamplingDate()
True
>>> third.getDateReceived() == secondary.getDateReceived()
True
If we change the dates from the root Primary:
>>> primary.setSamplingDate(DateTime() + 5)
>>> primary.setDateSampled(DateTime() + 10)
>>> primary.setDateReceived(DateTime() + 15)
Dates for secondaries are updated accordingly:
>>> third.getSamplingDate() == secondary.getSamplingDate() == primary.getSamplingDate()
True
>>> third.getDateSampled() == secondary.getDateSampled() == primary.getDateSampled()
True
>>> third.getDateReceived() == secondary.getDateReceived() == primary.getDateReceived()
True
Secondary Analysis Requests and partitions
When partitions are created from a secondary Analysis Request, the partitions
themselves are not considered secondaries of the primary AR, but partitions
of the Secondary Analysis Request.
Create a secondary Analysis Request:
>>> values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "SampleType": sampletype.UID(),
... "PrimaryAnalysisRequest": primary }
>>> service_uids = map(api.get_uid, [Cu, Fe, Au])
>>> secondary = create_analysisrequest(client, request, values, service_uids)
>>> secondary
<AnalysisRequest at /plone/clients/client-1/W-0001-S03>
Create a single partition from the secondary Analysis Request:
>>> analyses = secondary.getAnalyses()
>>> analyses_1 = analyses[0:1]
>>> analyses_2 = analyses[1:]
>>> partition = create_partition(secondary, request, analyses_1)
>>> partition
<AnalysisRequest at /plone/clients/client-1/W-0001-S03-P01>
>>> partition.isPartition()
True
>>> partition.getParentAnalysisRequest()
<AnalysisRequest at /plone/clients/client-1/W-0001-S03>
Partition does not provide IAnalysisRequestSecondary:
>>> IAnalysisRequestSecondary.providedBy(partition)
False
And does not keep the original Primary Analysis Request:
>>> partition.getPrimaryAnalysisRequest() is None
True
If we create another partition, the generated ID is increased by one:
>>> partition = create_partition(secondary, request, analyses_2)
>>> partition
<AnalysisRequest at /plone/clients/client-1/W-0001-S03-P02>
We can even create a secondary Analysis Request from a partition as the source:
>>> values = {
... "Client": client.UID(),
... "Contact": contact.UID(),
... "SampleType": sampletype.UID(),
... "PrimaryAnalysisRequest": partition }
>>> service_uids = map(api.get_uid, [Cu, Fe, Au])
>>> secondary = create_analysisrequest(client, request, values, service_uids)
>>> secondary
<AnalysisRequest at /plone/clients/client-1/W-0001-S03-P02-S01>
But note this new secondary is not considered a partition of a partition:
>>> secondary.isPartition()
False
But keeps the partition as the primary:
>>> secondary.getPrimaryAnalysisRequest()
<AnalysisRequest at /plone/clients/client-1/W-0001-S03-P02>
We can also create new partitions from this weird secondary:
>>> partition = create_partition(secondary, request, secondary.getAnalyses())
>>> partition
<AnalysisRequest at /plone/clients/client-1/W-0001-S03-P02-S01-P01>
Infinite recursion when fetching dependencies from Service
This test checks that no infinite recursion error arises when fetching the
dependencies of a Service (via its Calculation) whose formula includes the
keyword of another service that is, in turn, bound to a calculation referring
back to the first service.
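The standard guard against such cycles is to keep a set of services already visited while expanding formulas; this is a minimal sketch of the idea, not the actual `getCalculationDependencies` implementation:

```python
def flat_dependencies(service, formulas, _seen=None):
    """Collect all service keywords a service depends on via formulas.

    ``formulas`` maps a service keyword to the keywords referenced by
    its calculation.  The ``_seen`` set breaks Ca -> Mg -> Ca style
    cycles that would otherwise recurse forever.
    """
    if _seen is None:
        _seen = set()
    for dep in formulas.get(service, []):
        if dep in _seen:
            continue  # already expanded: stop instead of recursing again
        _seen.add(dep)
        flat_dependencies(dep, formulas, _seen)
    return _seen

# Ca's calculation references Ca and Mg; Mg's references Ca: a cycle.
formulas = {"Ca": ["Ca", "Mg"], "Mg": ["Ca"]}
print(sorted(flat_dependencies("Ca", formulas)))  # ['Ca', 'Mg']
```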
Running this test from the buildout directory:
bin/test test_textual_doctests -t ServicesCalculationRecursion.rst
Test Setup
Needed imports:
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from bika.lims import api
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = api.get_setup()
Create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
Creation of Service with a Calculation that refers to itself
The most common case is when the Calculation is assigned to the same Analysis
that is referred to in the Calculation’s formula:
>>> Ca = api.create(setup.bika_analysisservices, "AnalysisService", title="Calcium", Keyword="Ca", Price="20", Category=category.UID())
>>> Mg = api.create(setup.bika_analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg", Price="20", Category=category.UID())
>>> calc = api.create(setup.bika_calculations, "Calculation", title="Total Hardness")
>>> calc.setFormula("[Ca] + [Mg]")
>>> calc.getFormula()
'[Ca] + [Mg]'
>>> Ca.setCalculation(calc)
>>> Ca.getCalculation()
<Calculation at /plone/bika_setup/bika_calculations/calculation-1>
>>> deps = Ca.getServiceDependencies()
>>> sorted(map(lambda d: d.getKeyword(), deps))
['Ca', 'Mg']
>>> deps = calc.getCalculationDependencies()
>>> len(deps.keys())
2
>>> deps = calc.getCalculationDependencies(flat=True)
>>> sorted(map(lambda d: d.getKeyword(), deps))
['Ca', 'Mg']
The other case is when the initial Service is referred to indirectly, through
the calculation one of its dependencies is bound to:
>>> calc_mg = api.create(setup.bika_calculations, "Calculation", title="Test")
>>> calc_mg.setFormula("[Ca] + [Ca]")
>>> calc_mg.getFormula()
'[Ca] + [Ca]'
>>> Mg.setCalculation(calc_mg)
>>> Mg.getCalculation()
<Calculation at /plone/bika_setup/bika_calculations/calculation-2>
>>> deps = Mg.getServiceDependencies()
>>> sorted(map(lambda d: d.getKeyword(), deps))
['Ca', 'Mg']
>>> deps = calc_mg.getCalculationDependencies()
>>> len(deps.keys())
2
>>> deps = calc_mg.getCalculationDependencies(flat=True)
>>> sorted(map(lambda d: d.getKeyword(), deps))
['Ca', 'Mg']
>>> deps = Ca.getServiceDependencies()
>>> sorted(map(lambda d: d.getKeyword(), deps))
['Ca', 'Mg']
>>> deps = calc.getCalculationDependencies()
>>> len(deps.keys())
2
>>> deps = calc.getCalculationDependencies(flat=True)
>>> sorted(map(lambda d: d.getKeyword(), deps))
['Ca', 'Mg']
Show or Hide Prices
There’s a setting in BikaSetup called ‘Include and display pricing information’.
If this setting is disabled, then no mention of pricing or invoicing should
appear in the system. I still allowed the Price fields to appear in the
AnalysisService edit form, so that they may be modified while remaining
hidden elsewhere.
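As a rough picture of what the setting controls, think of it as a visibility switch for price-related UI elements; `visible_tabs` and the tab names are purely illustrative, not SENAITE's rendering code:

```python
def visible_tabs(show_prices):
    """Return the content tabs to render for a sample view.

    When pricing display is disabled, the Invoice tab (and any other
    price-related UI) is simply left out of the rendered page.
    """
    tabs = ["overview", "analyses", "results"]
    if show_prices:
        tabs.append("invoice")
    return tabs

print(visible_tabs(True))   # ['overview', 'analyses', 'results', 'invoice']
print(visible_tabs(False))  # ['overview', 'analyses', 'results']
```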
Running this test from the buildout directory:
bin/test -t ShowPrices
Test Setup
Needed Imports:
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from time import sleep
>>> import transaction
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def enableShowPrices():
... self.portal.bika_setup.setShowPrices(True)
... transaction.commit()
>>> def disableShowPrices():
... self.portal.bika_setup.setShowPrices(False)
... transaction.commit()
Variables:
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> request = self.request
>>> portal = self.portal
>>> bs = portal.bika_setup
>>> laboratory = bs.laboratory
>>> portal_url = portal.absolute_url()
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Now we need to create some basic content for our tests:
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(portal.bika_setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(portal.bika_setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(portal.bika_setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(portal.bika_setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(portal.bika_setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="409.17", Category=category.UID(), Accredited=True)
>>> Fe = api.create(portal.bika_setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="208.20", Category=category.UID())
>>> profile = api.create(portal.bika_setup.bika_analysisprofiles, "AnalysisProfile", title="Profile", Service=[Fe.UID(), Cu.UID()])
>>> template = api.create(portal.bika_setup.bika_artemplates, "ARTemplate", title="Template", AnalysisProfile=[profile.UID()])
Enable accreditation for the lab
>>> laboratory.setLaboratoryAccredited(True)
And start a browser:
>>> transaction.commit()
>>> browser = self.getBrowser()
Analysis Request View
Test show/hide prices when viewing an AR. First, create an AR:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
>>> service_uids = [Cu.UID(), Fe.UID()]
>>> ar = create_analysisrequest(client, request, values, service_uids)
TODO: Fails because the Barceloneta theme is loaded?!
With ShowPrices enabled, the Invoice tab should be rendered:
enableShowPrices()
browser.open(ar.absolute_url())
True if 'contentview-invoice' in browser.contents else "Invoice Tab is not visible, but ShowPrices is True."
True
And when ShowPrices is off, the Invoice tab should not be present at all:
disableShowPrices()
browser.open(ar.absolute_url())
True if 'contentview-invoice' not in browser.contents else "Invoice Tab is visible, but ShowPrices is False."
True
Specification and Results Ranges with Samples and analyses
A Specification is an object containing a list of results ranges, each of
which defines the min/max/warn_min/warn_max values to apply for a given
analysis service. A user can assign a Specification to a Sample, so the
results of its Analyses will be checked against the results ranges provided
by the Specification.
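The min/max/warn_min/warn_max semantics can be illustrated with a small classifier; the function name and the returned labels are hypothetical, not part of the SENAITE API:

```python
def classify_result(result, rr):
    """Classify a numeric result against a results range dict.

    Values between min and max are in range; values outside min/max but
    still within warn_min/warn_max trigger a warning; everything else is
    out of range.
    """
    if rr["min"] <= result <= rr["max"]:
        return "in_range"
    if rr["warn_min"] <= result <= rr["warn_max"]:
        return "warn"
    return "out_of_range"

# Same shape as the rr1..rr5 dicts used in this test (minus the uid).
rr_gold = {"min": 10, "max": 20, "warn_min": 5, "warn_max": 25}
print(classify_result(15, rr_gold))  # in_range
print(classify_result(22, rr_gold))  # warn
print(classify_result(30, rr_gold))  # out_of_range
```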
Running this test from the buildout directory:
bin/test test_textual_doctests -t SpecificationAndResultsRanges.rst
Test Setup
Needed imports:
>>> import transaction
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils.analysisrequest import create_partition
>>> from bika.lims.workflow import doActionFor as do_action_for
Functional Helpers:
>>> def new_sample(services, specification=None, results_ranges=None):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': DateTime().strftime("%Y-%m-%d"),
... 'SampleType': sampletype.UID(),
... 'Analyses': map(api.get_uid, services),
... 'Specification': specification or None }
...
... ar = create_analysisrequest(client, request, values, results_ranges=results_ranges)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def get_analysis_from(sample, service):
... service_uid = api.get_uid(service)
... for analysis in sample.getAnalyses(full_objects=True):
... if analysis.getServiceUID() == service_uid:
... return analysis
... return None
>>> def get_results_range_from(obj, service):
... field = obj.getField("ResultsRange")
... return field.get(obj, search_by=api.get_uid(service))
>>> def set_results_range_for(obj, results_range):
... rrs = obj.getResultsRange()
... uid = results_range["uid"]
... rrs = filter(lambda rr: rr["uid"] != uid, rrs)
... rrs.append(results_range)
... obj.setResultsRange(rrs)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = api.get_setup()
Create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID())
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Mg = api.create(setup.bika_analysisservices, "AnalysisService", title="Magnesium", Keyword="Mg", Price="20", Category=category.UID())
>>> Zn = api.create(setup.bika_analysisservices, "AnalysisService", title="Zinc", Keyword="Zn", Price="10", Category=category.UID())
Create an Analysis Specification for Water:
>>> sampletype_uid = api.get_uid(sampletype)
>>> rr1 = {"uid": api.get_uid(Au), "min": 10, "max": 20, "warn_min": 5, "warn_max": 25}
>>> rr2 = {"uid": api.get_uid(Cu), "min": 20, "max": 30, "warn_min": 15, "warn_max": 35}
>>> rr3 = {"uid": api.get_uid(Fe), "min": 30, "max": 40, "warn_min": 25, "warn_max": 45}
>>> rr4 = {"uid": api.get_uid(Mg), "min": 40, "max": 50, "warn_min": 35, "warn_max": 55}
>>> rr5 = {"uid": api.get_uid(Zn), "min": 50, "max": 60, "warn_min": 45, "warn_max": 65}
>>> rr = [rr1, rr2, rr3, rr4, rr5]
>>> specification = api.create(setup.bika_analysisspecs, "AnalysisSpec", title="Lab Water Spec", SampleType=sampletype_uid, ResultsRange=rr)
Creation of a Sample with Specification
A given Specification can be assigned to the Sample during the creation
process. The results ranges of that Specification will be stored in the
Sample’s ResultsRange field, and the analyses will acquire those results
ranges individually.
The Sample’s Specification is history-aware, so even if the Specification
object is changed after its assignment to the Sample, the results ranges of
both the Sample and its Analyses will remain untouched.
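This history-awareness amounts to copying the ranges at assignment time rather than keeping a live reference; a minimal sketch with illustrative class names:

```python
import copy

class Spec(object):
    """Toy specification holding a list of results range dicts."""
    def __init__(self, ranges):
        self.ranges = ranges

class Sample(object):
    """Toy sample that snapshots the spec's ranges on assignment."""
    def set_specification(self, spec):
        # Deep-copy the ranges: later edits to the Spec won't leak in.
        self.results_range = copy.deepcopy(spec.ranges)

spec = Spec([{"uid": "au", "min": 10, "max": 20}])
sample = Sample()
sample.set_specification(spec)
spec.ranges[0]["min"] = 15          # change the Specification afterwards
print(sample.results_range[0]["min"])  # still 10: the snapshot is untouched
```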
Create a Sample and receive:
>>> services = [Au, Cu, Fe, Mg]
>>> sample = new_sample(services, specification=specification)
The sample has the specification assigned:
>>> sample.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
And its results ranges match with the sample’s ResultsRange field value:
>>> specification.getResultsRange() == sample.getResultsRange()
True
And the analyses the sample contains have the results ranges properly set:
>>> au = get_analysis_from(sample, Au)
>>> au.getResultsRange() == get_results_range_from(specification, Au)
True
>>> cu = get_analysis_from(sample, Cu)
>>> cu.getResultsRange() == get_results_range_from(specification, Cu)
True
>>> fe = get_analysis_from(sample, Fe)
>>> fe.getResultsRange() == get_results_range_from(specification, Fe)
True
>>> mg = get_analysis_from(sample, Mg)
>>> mg.getResultsRange() == get_results_range_from(specification, Mg)
True
We can change a result range by using properties:
>>> rr_au = au.getResultsRange()
>>> rr_au.min = 11
>>> rr_au.max = 21
>>> (rr_au.min, rr_au.max)
(11, 21)
Or using it as a dict:
>>> rr_au["min"] = 15
>>> rr_au["max"] = 25
>>> (rr_au["min"], rr_au["max"])
(15, 25)
If we change this results range in the Specification object, neither the
Sample nor its analyses are affected:
>>> set_results_range_for(specification, rr_au)
>>> specification.getResultsRange() == sample.getResultsRange()
False
>>> au.getResultsRange() == get_results_range_from(specification, Au)
False
>>> get_results_range_from(sample, Au) == au.getResultsRange()
True
>>> rr_sample_au = au.getResultsRange()
>>> (rr_sample_au.min, rr_sample_au.max)
(10, 20)
If we re-apply the Specification, nothing changes though, because its UID is
still the same:
>>> sample.setSpecification(specification)
>>> specification.getResultsRange() == sample.getResultsRange()
False
But the ResultsRange value from Sample is updated accordingly if we set the
specification to None first:
>>> sample.setSpecification(None)
>>> sample.setSpecification(specification)
>>> specification.getResultsRange() == sample.getResultsRange()
True
As well as the analyses the sample contains:
>>> au.getResultsRange() == get_results_range_from(specification, Au)
True
>>> rr_sample_au = au.getResultsRange()
>>> (rr_sample_au.min, rr_sample_au.max)
(15, 25)
Removal of Analyses from a Sample with Specifications
A user can remove analyses from the Sample. If one of the analyses is removed,
the Specification assigned to the Sample remains intact, as does the Sample’s
Results Range:
>>> sample.setAnalyses([Au, Cu, Fe])
>>> analyses = sample.objectValues()
>>> sorted(analyses, key=lambda an: an.getKeyword())
[<Analysis at /plone/clients/client-1/W-0001/Au>, <Analysis at /plone/clients/client-1/W-0001/Cu>, <Analysis at /plone/clients/client-1/W-0001/Fe>]
>>> sample.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
>>> specification.getResultsRange() == sample.getResultsRange()
True
Addition of Analyses to a Sample with Specifications
A user can add new analyses to the Sample as well. If the Sample has a
Specification set and the Specification has a results range registered for
that analysis, the results range for the new analysis is set automatically:
>>> sample.setAnalyses([Au, Cu, Fe, Zn])
>>> sample.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
>>> zn = get_analysis_from(sample, Zn)
>>> zn.getResultsRange() == get_results_range_from(specification, Zn)
True
If we reset an Analysis with its own ResultsRange, different from the range
defined by the Specification, the system does not clear the Specification:
>>> rr_zn = zn.getResultsRange()
>>> rr_zn.min = 55
>>> sample.setAnalyses([Au, Cu, Fe, Zn], specs=[rr_zn])
>>> sample.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
and the Sample’s ResultsRange is kept unchanged:
>>> sample_rr = sample.getResultsRange()
>>> len(sample_rr)
5
with result range for Zn unchanged:
>>> sample_rr_zn = sample.getResultsRange(search_by=api.get_uid(Zn))
>>> sample_rr_zn.min
50
But analysis’ result range has indeed changed:
>>> zn.getResultsRange().min
55
If we re-apply the Specification, the results ranges for Zn, as well as for
the Sample, are re-established:
>>> sample.setSpecification(None)
>>> sample.setSpecification(specification)
>>> specification.getResultsRange() == sample.getResultsRange()
True
>>> zn.getResultsRange() == get_results_range_from(specification, Zn)
True
>>> zn.getResultsRange().min
50
Sample with Specifications and Partitions
When a sample has partitions, the Specification set on the root Sample is
propagated to all its descendants:
>>> partition = create_partition(sample, request, [zn])
>>> partition
<AnalysisRequest at /plone/clients/client-1/W-0001-P01>
>>> zn = get_analysis_from(partition, Zn)
>>> zn
<Analysis at /plone/clients/client-1/W-0001-P01/Zn>
The partition keeps the Specification and ResultsRange by its own:
>>> partition.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
>>> partition.getResultsRange() == specification.getResultsRange()
True
If we reset an Analysis with its own ResultsRange, different from the range
defined by the Specification, the system does not clear the Specification,
neither from the root sample nor from the partition:
>>> rr_zn = zn.getResultsRange()
>>> rr_zn.min = 56
>>> partition.setAnalyses([Zn], specs=[rr_zn])
>>> sample.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
>>> partition.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
And Results Range from both Sample and partition are kept untouched:
>>> sample.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
>>> sample.getResultsRange() == specification.getResultsRange()
True
>>> partition.getSpecification()
<AnalysisSpec at /plone/bika_setup/bika_analysisspecs/analysisspec-1>
>>> partition.getResultsRange() == specification.getResultsRange()
True
Sysmex xt i1800 import interface
Running this test from the buildout directory:
bin/test test_textual_doctests -t SysmexXTi1800ImportInterface
Test Setup
Needed imports:
>>> import os
>>> import transaction
>>> from Products.CMFCore.utils import getToolByName
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from DateTime import DateTime
>>> import codecs
>>> from senaite.core.exportimport import instruments
>>> from senaite.core.exportimport.instruments.sysmex.xt import SysmexXTImporter
>>> from senaite.core.exportimport.instruments.sysmex.xt.i1800 import TX1800iParser
>>> from senaite.core.exportimport.auto_import_results import UploadFileWrapper
Functional helpers:
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_instruments = bika_setup.bika_instruments
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager:
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Availability of instrument interface
Check that the instrument interface is available:
>>> exims = []
>>> for exim_id in instruments.__all__:
... exims.append(exim_id)
>>> 'sysmex.xt.i1800' in exims
True
Assigning the Import Interface to an Instrument
Create an Instrument and assign to it the tested Import Interface:
>>> instrument = api.create(bika_instruments, "Instrument", title="Instrument-1")
>>> instrument
<Instrument at /plone/bika_setup/bika_instruments/instrument-1>
>>> instrument.setImportDataInterface(['sysmex.xt.i1800'])
>>> instrument.getImportDataInterface()
['sysmex.xt.i1800']
Import test
Required steps: Create and receive Analysis Request for import test
An AnalysisRequest can only be created inside a Client, and it also requires a Contact and
a SampleType:
>>> clients = self.portal.clients
>>> client = api.create(clients, "Client", Name="NARALABS", ClientID="NLABS")
>>> client
<Client at /plone/clients/client-1>
>>> contact = api.create(client, "Contact", Firstname="Juan", Surname="Gallostra")
>>> contact
<Contact at /plone/clients/client-1/contact-1>
>>> sampletype = api.create(bika_sampletypes, "SampleType", Prefix="H2O", MinimumVolume="100 ml")
>>> sampletype
<SampleType at /plone/bika_setup/bika_sampletypes/sampletype-1>
Create an AnalysisCategory (which categorizes different AnalysisServices), and add to it some
of the AnalysisServices that are found in the results file:
>>> analysiscategory = api.create(bika_analysiscategories, "AnalysisCategory", title="Water")
>>> analysiscategory
<AnalysisCategory at /plone/bika_setup/bika_analysiscategories/analysiscategory-1>
>>> analysisservice_1 = api.create(bika_analysisservices,
... "AnalysisService",
... title="WBC",
... ShortTitle="wbc",
... Category=analysiscategory,
... Keyword="WBC")
>>> analysisservice_1
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-1>
>>> analysisservice_2 = api.create(bika_analysisservices,
... "AnalysisService",
... title="RBC",
... ShortTitle="rbc",
... Category=analysiscategory,
... Keyword="RBC")
>>> analysisservice_2
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-2>
>>> analysisservice_3 = api.create(bika_analysisservices,
... "AnalysisService",
... title="HGB",
... ShortTitle="hgb",
... Category=analysiscategory,
... Keyword="HGB")
>>> analysisservice_3
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-3>
>>> analysisservice_4 = api.create(bika_analysisservices,
... "AnalysisService",
... title="HCT",
... ShortTitle="hct",
... Category=analysiscategory,
... Keyword="HCT")
>>> analysisservice_4
<AnalysisService at /plone/bika_setup/bika_analysisservices/analysisservice-4>
>>> analysisservices = [analysisservice_1, analysisservice_2, analysisservice_3, analysisservice_4]
Create an AnalysisRequest with this AnalysisService and receive it:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SamplingDate': date_now,
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()
... }
>>> service_uids = [analysisservice.UID() for analysisservice in analysisservices]
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> ar
<AnalysisRequest at /plone/clients/client-1/H2O-0001>
>>> ar.getReceivedBy()
''
>>> wf = getToolByName(ar, 'portal_workflow')
>>> wf.doActionFor(ar, 'receive')
>>> ar.getReceivedBy()
'test_user_1_'
Import test
Load results test file and import the results:
>>> dir_path = os.path.abspath(os.path.join(os.path.dirname( __file__ ), '..', 'files'))
>>> temp_file = codecs.open(dir_path + '/2012-05-09_11-06-14-425_CBDB6A.txt',
... encoding='utf-8-sig')
>>> test_file = UploadFileWrapper(temp_file)
>>> tx1800i_parser = TX1800iParser(test_file)
>>> importer = SysmexXTImporter(parser=tx1800i_parser,
... context=portal,
... allowed_ar_states=['sample_received', 'to_be_verified'],
... allowed_analysis_states=None,
... override=[True, True])
>>> importer.process()
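The utf-8-sig codec used when opening the results file above matters because some instrument exports begin with a UTF-8 byte order mark (BOM); a quick stdlib-only illustration:

```python
# A file payload as a Sysmex export might begin: BOM + first field.
data = b"\xef\xbb\xbfWBC\t6.01"

# Plain 'utf-8' keeps the BOM, polluting the first parsed field:
print(data.decode("utf-8").startswith("\ufeff"))  # True

# 'utf-8-sig' strips the BOM transparently, so parsing sees clean text:
print(data.decode("utf-8-sig"))  # WBC	6.01
```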
Check the importer logs to verify that the results were imported from the
specified file:
>>> '2012-05-09_11-06-14-425_CBDB6A.txt' in importer.logs[0]
True
Check the rest of the importer logs to verify that the values were correctly imported:
>>> importer.logs[-1]
'Import finished successfully: 1 Samples and 4 results updated'
And finally check if indeed the analysis has the imported results:
>>> analyses = ar.getAnalyses()
>>> an = [analysis.getObject() for analysis in analyses if analysis.Title=='WBC'][0]
>>> an.getResult()
'6.01'
>>> an = [analysis.getObject() for analysis in analyses if analysis.Title=='RBC'][0]
>>> an.getResult()
'5.02'
>>> an = [analysis.getObject() for analysis in analyses if analysis.Title=='HGB'][0]
>>> an.getResult()
'13.2'
>>> an = [analysis.getObject() for analysis in analyses if analysis.Title=='HCT'][0]
>>> an.getResult()
'40.0'
Sysmex xt i4000 import interface
Running this test from the buildout directory:
bin/test test_textual_doctests -t SysmexXTi4000ImportInterface
Notes
Since the Sysmex xt i4000 uses the same parser and importer as the Sysmex xt
i1800, this test only checks that the import interface of the i4000 can be
assigned to an instrument. The functional tests for the parser and importer
can be found in the tests for the Sysmex xt i1800.
Test Setup
Needed imports:
>>> import os
>>> import transaction
>>> from Products.CMFCore.utils import getToolByName
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from DateTime import DateTime
>>> import codecs
>>> from senaite.core.exportimport import instruments
>>> from senaite.core.exportimport.instruments.sysmex.xt import SysmexXTImporter
>>> from senaite.core.exportimport.instruments.sysmex.xt.i1800 import TX1800iParser
Functional helpers:
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
Variables:
>>> date_now = timestamp()
>>> portal = self.portal
>>> request = self.request
>>> bika_setup = portal.bika_setup
>>> bika_instruments = bika_setup.bika_instruments
>>> bika_sampletypes = bika_setup.bika_sampletypes
>>> bika_samplepoints = bika_setup.bika_samplepoints
>>> bika_analysiscategories = bika_setup.bika_analysiscategories
>>> bika_analysisservices = bika_setup.bika_analysisservices
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager:
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
Availability of instrument interface
Check that the instrument interface is available:
>>> exims = []
>>> for exim_id in instruments.__all__:
... exims.append(exim_id)
>>> 'sysmex.xt.i4000' in exims
True
Assigning the Import Interface to an Instrument
Create an Instrument and assign to it the tested Import Interface:
>>> instrument = api.create(bika_instruments, "Instrument", title="Instrument-1")
>>> instrument
<Instrument at /plone/bika_setup/bika_instruments/instrument-1>
>>> instrument.setImportDataInterface(['sysmex.xt.i4000'])
>>> instrument.getImportDataInterface()
['sysmex.xt.i4000']
UIDReferenceField
UIDReferenceField is a drop-in replacement for Plone’s ReferenceField which
uses a StringField to store a UID or a list of UIDs.
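The core idea, storing plain UID strings and resolving objects on read, can be sketched as follows; the `UIDReference` class and its registry are a toy stand-in for the field and the UID catalog, not the real implementation:

```python
class UIDReference(object):
    """Toy stand-in for a UID-based reference field.

    Instead of holding direct object references, only UID strings are
    stored; the referenced objects are looked up on access.
    """
    registry = {}  # uid -> object, stands in for the UID catalog

    def __init__(self):
        self.uids = []

    def set(self, objs):
        self.uids = [obj["uid"] for obj in objs]

    def get(self):
        return [self.registry[uid] for uid in self.uids]

as1 = {"uid": "uid-as1", "title": "AS 1"}
UIDReference.registry["uid-as1"] = as1
field = UIDReference()
field.set([as1])
print(field.uids)               # ['uid-as1'] -- only strings are stored
print(field.get()[0]["title"])  # AS 1       -- resolved through the catalog
```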
Running this test from the buildout directory:
bin/test test_textual_doctests -t UIDReferenceField
Needed Imports:
>>> import re
>>> from bika.lims import api
>>> from bika.lims.browser.fields.uidreferencefield import get_backreferences
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bika_calculations = portal.bika_setup.bika_calculations
>>> bika_analysisservices = portal.bika_setup.bika_analysisservices
Test user:
We need certain permissions to create and access objects used in this test,
so here we will assume the role of Lab Manager.
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import setRoles
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
I’ll test using the relationship between Calculations and AnalysisServices.
First I’ll create some AnalysisServices and Calculations:
>>> as1 = api.create(bika_analysisservices, "AnalysisService", title="AS 1")
>>> as1.setKeyword("as1")
>>> as1.reindexObject()
>>> as2 = api.create(bika_analysisservices, "AnalysisService", title="AS 2")
>>> as2.setKeyword("as2")
>>> as2.reindexObject()
>>> as3 = api.create(bika_analysisservices, "AnalysisService", title="AS 3")
>>> as3.setKeyword("as3")
>>> as3.reindexObject()
>>> c1 = api.create(bika_calculations, "Calculation", title="C 1")
>>> c2 = api.create(bika_calculations, "Calculation", title="C 2")
Put some AS Keywords into the Formula field of the calculations, which will
cause their DependentServices field (a UIDReferenceField) to be populated.
>>> c1.setFormula("[as1] + [as2] + [as3]")
>>> c2.setFormula("[as1] + [as2]")
c1 now depends on three services:
>>> deps = [s.Title() for s in c1.getDependentServices()]
>>> deps.sort()
>>> deps
['AS 1', 'AS 2', 'AS 3']
c2 now depends on two services:
>>> deps = [s.Title() for s in c2.getDependentServices()]
>>> deps.sort()
>>> deps
['AS 1', 'AS 2']
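The dependency lookup above works because the service keywords appear as
[keyword] tokens inside the formula string. A minimal sketch of that keyword
extraction (plain Python, not the actual SENAITE implementation):

```python
import re

def extract_keywords(formula):
    """Return the unique service keywords referenced as [keyword] tokens."""
    return sorted(set(re.findall(r"\[([^\]]+)\]", formula)))

# The formulas used above reference three and two services respectively
deps_c1 = extract_keywords("[as1] + [as2] + [as3]")
deps_c2 = extract_keywords("[as1] + [as2]")
```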
Backreferences are stored on each object which is a target of a
UIDReferenceField. This allows a service to ask, “which calculations
include me in their DependentServices?”:
>>> get_backreferences(as1, 'CalculationDependentServices')
['...', '...']
It also allows to find out which services have selected a particular
calculation as their primary Calculation field’s value:
>>> as3.setCalculation(c2)
>>> get_backreferences(c2, 'AnalysisServiceCalculation')
['...']
The value will always be a list of UIDs, unless as_brains is True:
>>> get_backreferences(c2, 'AnalysisServiceCalculation', as_brains=1)
[<Products.ZCatalog.Catalog.mybrains object at ...>]
If no relationship is specified when calling get_backreferences, a dict is
returned (by reference) containing the UIDs of all references for all relationships.
Modifying this dict in-place will change the stored backreferences!
>>> get_backreferences(as1)
{'CalculationDependentServices': ['...', '...']}
When requesting the entire set of backreferences, only UIDs may be returned;
it is an error to request brains:
>>> get_backreferences(as1, as_brains=True)
Traceback (most recent call last):
...
AssertionError: You cannot use as_brains with no relationship
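The storage scheme behind these lookups can be modelled roughly as a registry
that maps each target UID to a dict of relationship names and source UIDs. The
class and method names below are illustrative only, not the SENAITE API:

```python
# Minimal model of UID-based backreferences. The real field persists this
# data in object annotations; this sketch keeps it in a plain dict.
class ReferenceRegistry:
    def __init__(self):
        # target UID -> {relationship name: [source UIDs]}
        self.backrefs = {}

    def link(self, source_uid, target_uid, relationship):
        rels = self.backrefs.setdefault(target_uid, {})
        rels.setdefault(relationship, []).append(source_uid)

    def get_backreferences(self, target_uid, relationship=None):
        rels = self.backrefs.get(target_uid, {})
        if relationship is None:
            # Returned by reference: in-place mutations persist!
            return rels
        return rels.get(relationship, [])

registry = ReferenceRegistry()
registry.link("c1", "as1", "CalculationDependentServices")
registry.link("c2", "as1", "CalculationDependentServices")
```

This also demonstrates why mutating the full dict is dangerous: the registry
hands out its internal mapping rather than a copy.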
Versioning
NOTE: Versioning is outdated!
This code will be removed as soon as we remove the HistoryAwareReferenceField reference
between Calculation and Analysis.
Each edit & save process creates a new version, which is triggered by the
ObjectEditedEvent from Products.CMFEditions package.
Test Setup
>>> from Acquisition import aq_base
>>> from plone import api as ploneapi
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> portal = self.portal
>>> setRoles(portal, TEST_USER_ID, ['LabManager', 'Manager', 'Owner'])
>>> def is_versionable(obj):
... pr = ploneapi.portal.get_tool("portal_repository")
... return pr.supportsPolicy(obj, 'at_edit_autoversion') and pr.isVersionable(obj)
>>> def get_version(obj):
... if not is_versionable(obj):
... return None
... return getattr(aq_base(obj), "version_id", None)
Calculations
Create a calculation for testing:
>>> calculations = self.portal.bika_setup.bika_calculations
>>> _ = calculations.invokeFactory("Calculation", id="tempId", title="Test Calculation 1")
>>> calculation = calculations.get(_)
Process Form to notify Bika about the new content type:
>>> calculation.processForm()
Calculations should be versionable:
>>> is_versionable(calculation)
True
>>> get_version(calculation)
0
Create a new version – for testing, it is sufficient to call the processForm
method, as it is also called after the content has been edited:
>>> calculation.processForm()
>>> get_version(calculation)
1
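The version counter behaviour shown above can be sketched as follows. This is
an illustration of the numbering only (initial save yields version 0, each
subsequent edit increments it); the names are hypothetical, not the
CMFEditions API:

```python
# Sketch of the versioning behaviour: the first save creates version 0,
# and every later edit/save bumps the version_id by one.
class Versioned:
    def __init__(self):
        self.version_id = None  # not yet versioned

    def process_form(self):
        if self.version_id is None:
            self.version_id = 0  # initial version on creation
        else:
            self.version_id += 1  # new version on each edit

calc = Versioned()
calc.process_form()  # initial save -> version 0
calc.process_form()  # edit -> version 1
```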
Analysis assign guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisAssign
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Assign transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
The status of the analyses is unassigned:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['unassigned', 'unassigned', 'unassigned']
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
>>> sorted((map(lambda an: an.getKeyword(), worksheet.getAnalyses())))
['Au', 'Cu', 'Fe']
Analyses have been transitioned to assigned:
>>> map(api.get_workflow_status_of, analyses)
['assigned', 'assigned', 'assigned']
And all of them are associated with the worksheet:
>>> ws_uid = api.get_uid(worksheet)
>>> filter(lambda an: an.getWorksheetUID() != ws_uid, analyses)
[]
Analyses do not have an Analyst assigned, though:
>>> filter(lambda an: an.getAnalyst(), analyses)
[]
If I assign a user to the Worksheet, the same user will be assigned to the analyses:
>>> worksheet.setAnalyst(TEST_USER_ID)
>>> worksheet.getAnalyst() == TEST_USER_ID
True
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, analyses)
[]
I can remove an analysis from the Worksheet:
>>> cu = filter(lambda an: an.getKeyword() == "Cu", analyses)[0]
>>> cu_uid = api.get_uid(cu)
>>> worksheet.removeAnalysis(cu)
>>> filter(lambda an: api.get_uid(an) == cu_uid, worksheet.getAnalyses())
[]
So the state of cu is now unassigned:
>>> api.get_workflow_status_of(cu)
'unassigned'
The Analyst is no longer assigned to the analysis:
>>> cu.getAssignedAnalyst()
''
From assigned state I can do submit:
>>> au = filter(lambda an: an.getKeyword() == "Au", analyses)[0]
>>> api.get_workflow_status_of(au)
'assigned'
>>> au.setResult(20)
>>> try_transition(au, "submit", "to_be_verified")
True
And the analysis transitions to to_be_verified:
>>> api.get_workflow_status_of(au)
'to_be_verified'
While keeping the Analyst that was assigned to the worksheet:
>>> au.getAnalyst() == TEST_USER_ID
True
And since there is still one analysis in the Worksheet not yet submitted, the
Worksheet remains in open state:
>>> api.get_workflow_status_of(worksheet)
'open'
But if I remove the remaining analysis, the status of the Worksheet is promoted
to to_be_verified, because all the analyses still assigned are in this state:
>>> fe = filter(lambda an: an.getKeyword() == "Fe", analyses)[0]
>>> worksheet.removeAnalysis(fe)
>>> fe.getWorksheet() is None
True
>>> api.get_workflow_status_of(fe)
'unassigned'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
In to_be_verified status, I cannot remove analyses:
>>> worksheet.removeAnalysis(au)
>>> map(lambda an: an.getKeyword(), worksheet.getAnalyses())
['Au']
>>> au.getWorksheetUID() == api.get_uid(worksheet)
True
>>> api.get_workflow_status_of(au)
'to_be_verified'
But I can still add more analyses:
>>> worksheet.addAnalysis(fe)
>>> filter(lambda an: an.getKeyword() == "Fe", worksheet.getAnalyses())
[<Analysis at /plone/clients/client-1/W-0001/Fe>]
Causing the Worksheet to roll back to open status:
>>> api.get_workflow_status_of(worksheet)
'open'
If I remove Fe analysis again, worksheet is promoted to to_be_verified:
>>> worksheet.removeAnalysis(fe)
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
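The promotion and rollback behaviour seen here can be summarised as: the
worksheet state is derived from the states of its assigned analyses. A minimal
sketch of that rule (not the actual workflow machinery):

```python
# Sketch: a worksheet is promoted to "to_be_verified" only when every
# analysis still assigned to it has reached that state; adding back an
# analysis in an earlier state rolls the worksheet back to "open".
def worksheet_state(analysis_states):
    if analysis_states and all(s == "to_be_verified" for s in analysis_states):
        return "to_be_verified"
    return "open"

# One analysis still pending keeps the worksheet open
mixed = worksheet_state(["to_be_verified", "assigned"])
# All submitted -> promoted
done = worksheet_state(["to_be_verified", "to_be_verified"])
```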
Let’s create another worksheet and add the remaining analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.addAnalysis(cu)
>>> worksheet.addAnalysis(fe)
>>> sorted((map(lambda an: an.getKeyword(), worksheet.getAnalyses())))
['Cu', 'Fe']
The status of the analyses is now assigned:
>>> api.get_workflow_status_of(cu)
'assigned'
>>> api.get_workflow_status_of(fe)
'assigned'
And I cannot re-assign:
>>> isTransitionAllowed(cu, "assign")
False
Submit results:
>>> cu.setResult(12)
>>> fe.setResult(12)
>>> try_transition(cu, "submit", "to_be_verified")
True
>>> try_transition(fe, "submit", "to_be_verified")
True
State of the analyses and worksheet is to_be_verified:
>>> api.get_workflow_status_of(cu)
'to_be_verified'
>>> api.get_workflow_status_of(fe)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
Check permissions for Assign transition
Create an Analysis Request:
>>> ar = new_ar([Cu])
The status of the analysis is registered:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['registered']
>>> analysis = analyses[0]
But assign is not allowed unless we receive the Analysis Request, whereupon
the analysis is automatically transitioned to the unassigned state:
>>> isTransitionAllowed(analysis, "assign")
False
>>> transitioned = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['unassigned']
Exactly these roles can assign:
>>> analysis = analyses[0]
>>> get_roles_for_permission("senaite.core: Transition: Assign Analysis", analysis)
['Analyst', 'LabClerk', 'LabManager', 'Manager']
Current user can assign because they have the LabManager role:
>>> isTransitionAllowed(analysis, "assign")
True
Users with roles Analyst or LabClerk can assign too:
>>> setRoles(portal, TEST_USER_ID, ['Analyst',])
>>> isTransitionAllowed(analysis, "assign")
True
>>> setRoles(portal, TEST_USER_ID, ['LabClerk',])
>>> isTransitionAllowed(analysis, "assign")
True
Although other roles cannot:
>>> setRoles(portal, TEST_USER_ID, ['Authenticated', 'Owner'])
>>> isTransitionAllowed(analysis, "assign")
False
Reset settings:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
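The permission checks above follow a simple pattern: a transition is allowed
only if the current user holds at least one of the roles granted the
transition permission. A minimal sketch of such a role-based guard (the
role set mirrors the doctest output; the function is illustrative):

```python
# Sketch of a role-based transition guard: the "assign" transition is
# permitted when the user holds any of the granted roles.
ASSIGN_ROLES = {"Analyst", "LabClerk", "LabManager", "Manager"}

def can_assign(user_roles):
    """Return True if any of the user's roles grants the assign permission."""
    return bool(ASSIGN_ROLES.intersection(user_roles))
```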
Analysis multi-verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisMultiVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Multiverify not allowed if multi-verification is not enabled
Enable the self verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu])
>>> submit_analyses(ar)
The status of the Analysis Request and its analyses is to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> analysis = analyses[0]
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
I cannot multi verify the analysis because multi-verification is not set:
>>> isTransitionAllowed(analysis, "multi_verify")
False
>>> try_transition(analysis, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
But I can verify:
>>> isTransitionAllowed(analysis, "verify")
True
>>> try_transition(analysis, "verify", "verified")
True
And the status of the analysis and others is now verified:
>>> api.get_workflow_status_of(analysis)
'verified'
>>> api.get_workflow_status_of(ar)
'verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Multiverify transition with multi-verification enabled
The system allows setting multiple verifiers, at both Setup and Analysis
Service level. If set, the analysis will transition to verified only when the
total number of verifications equals the configured number of required
verifications.
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Not allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_disabled')
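The multi-verification rules exercised below can be modelled roughly as
follows. This is a simplified sketch under stated assumptions (class and
method names are hypothetical, and the real workflow splits the final step
into a separate "verify" transition):

```python
# Minimal model of multi-verification: an analysis needs N verifications,
# and in "self_multi_disabled" mode the same user may not verify twice
# in a row.
class MultiVerify:
    def __init__(self, required, mode="self_multi_disabled"):
        self.required = required
        self.mode = mode
        self.verifiers = []  # users who verified so far, in order

    def remaining(self):
        return self.required - len(self.verifiers)

    def can_verify(self, user):
        if self.remaining() <= 0:
            return False  # already fully verified
        if (self.mode == "self_multi_disabled"
                and self.verifiers and self.verifiers[-1] == user):
            return False  # same user verified last time
        return True

    def verify(self, user):
        if not self.can_verify(user):
            return False
        self.verifiers.append(user)
        return True

    def state(self):
        return "verified" if self.remaining() == 0 else "to_be_verified"
```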
Create an Analysis Request, a worksheet and submit results:
>>> ar = new_ar([Cu])
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
>>> submit_analyses(ar)
The status of the Analysis Request, the Worksheet and analyses is
to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
I cannot verify:
>>> isTransitionAllowed(analysis, "verify")
False
>>> try_transition(analysis, "verify", "verified")
False
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
Because multi-verification is enabled:
>>> bikasetup.getNumberOfRequiredVerifications()
3
And there are 3 verifications remaining:
>>> analysis.getNumberOfRemainingVerifications()
3
But I can multi-verify:
>>> isTransitionAllowed(analysis, "multi_verify")
True
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
The status of the analysis and others is still to_be_verified:
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And my user id is recorded as such:
>>> action = api.get_review_history(analysis)[0]
>>> action['actor'] == TEST_USER_ID
True
And now, there are two verifications remaining:
>>> analysis.getNumberOfRemainingVerifications()
2
So, I cannot verify yet:
>>> isTransitionAllowed(analysis, "verify")
False
>>> try_transition(analysis, "verify", "verified")
False
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
But I cannot multi-verify either, because I am the same user who performed the
last multi-verification:
>>> isTransitionAllowed(analysis, "multi_verify")
False
>>> try_transition(analysis, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
And the system is configured to not allow the same user to verify multiple times:
>>> bikasetup.getTypeOfmultiVerification()
'self_multi_disabled'
But I can multi-verify if I change the type of multi-verification:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
>>> isTransitionAllowed(analysis, "multi_verify")
True
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
The status of the analysis and others is still to_be_verified:
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And now, there is only one verification remaining:
>>> analysis.getNumberOfRemainingVerifications()
1
Since there is only one verification remaining, I cannot multi-verify again:
>>> isTransitionAllowed(analysis, "multi_verify")
False
>>> try_transition(analysis, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
But now, I can verify:
>>> isTransitionAllowed(analysis, "verify")
True
>>> try_transition(analysis, "verify", "verified")
True
There are no verifications remaining:
>>> analysis.getNumberOfRemainingVerifications()
0
And the status of the analysis and others is now verified:
>>> api.get_workflow_status_of(analysis)
'verified'
>>> api.get_workflow_status_of(ar)
'verified'
>>> api.get_workflow_status_of(worksheet)
'verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Check permissions for Multi verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu])
>>> submit_analyses(ar)
The status of the Analysis Request and its analyses is to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['to_be_verified']
Exactly these roles can multi-verify:
>>> analysis = analyses[0]
>>> get_roles_for_permission("senaite.core: Transition: Verify", analysis)
['LabManager', 'Manager', 'Verifier']
Current user can multi-verify because they have the LabManager role:
>>> isTransitionAllowed(analysis, "multi_verify")
True
Users with the Manager or Verifier roles can too:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(analysis, "multi_verify")
True
>>> setRoles(portal, TEST_USER_ID, ['Verifier',])
>>> isTransitionAllowed(analysis, "multi_verify")
True
But cannot for other roles:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(analysis, "multi_verify")
False
Even if the user is Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(analysis, "multi_verify")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(analysis, "multi_verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by fully verified analyses
Analyses that have not been fully verified do not provide IVerified:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.setNumberOfRequiredVerifications(2)
>>> bikasetup.setTypeOfmultiVerification("self_multi_enabled")
>>> sample = new_ar([Cu])
>>> submit_analyses(sample)
>>> analysis = sample.getAnalyses(full_objects=True)[0]
>>> IVerified.providedBy(analysis)
False
>>> success = do_action_for(analysis, "multi_verify")
>>> IVerified.providedBy(analysis)
False
>>> success = do_action_for(analysis, "verify")
>>> IVerified.providedBy(analysis)
True
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.setNumberOfRequiredVerifications(1)
Analysis publication guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisPublish
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import getAllowedTransitions
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def verify_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... do_action_for(analysis, "verify")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> bikasetup.setSelfVerificationEnabled(True)
Publish transition and guard basic constraints
Create an Analysis Request, submit results and verify:
>>> ar = new_ar([Cu, Fe, Au])
>>> submit_analyses(ar)
>>> verify_analyses(ar)
>>> api.get_workflow_status_of(ar)
'verified'
I cannot publish the analyses individually:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> try_transition(analyses[0], "publish", "published")
False
>>> api.get_workflow_status_of(analyses[0])
'verified'
>>> try_transition(analyses[1], "publish", "published")
False
>>> api.get_workflow_status_of(analyses[1])
'verified'
>>> try_transition(analyses[2], "publish", "published")
False
>>> api.get_workflow_status_of(analyses[2])
'verified'
But if we publish the Analysis Request, analyses will follow:
>>> success = do_action_for(ar, "publish")
>>> api.get_workflow_status_of(ar)
'published'
>>> map(api.get_workflow_status_of, analyses)
['published', 'published', 'published']
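The cascade seen here can be sketched as: publishing is triggered on the
Analysis Request and propagated to its analyses, while the analyses reject
the transition when attempted directly. The classes below are a minimal
illustration, not the SENAITE workflow implementation:

```python
# Sketch of the publish cascade: analyses cannot be published on their
# own, but publishing the Analysis Request transitions them along.
class Analysis:
    def __init__(self):
        self.state = "verified"

    def publish(self):
        return False  # guard: direct publication is not allowed

class AnalysisRequest:
    def __init__(self, analyses):
        self.state = "verified"
        self.analyses = analyses

    def publish(self):
        self.state = "published"
        for analysis in self.analyses:
            analysis.state = "published"  # analyses follow the request
        return True

analyses = [Analysis(), Analysis(), Analysis()]
ar = AnalysisRequest(analyses)
```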
Check permissions for Published state
In published state, exactly these roles can view results:
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> api.get_workflow_status_of(analysis)
'published'
>>> get_roles_for_permission("senaite.core: View Results", analysis)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Owner', 'Publisher', 'RegulatoryInspector', 'Sampler', 'Verifier']
And no transition can be done from this state:
>>> getAllowedTransitions(analysis)
[]
Analysis rejection guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisReject
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IRejected
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import getAllowedTransitions
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_duplicate(ar):
... worksheet = api.create(portal.worksheets, "Worksheet")
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... worksheet.addDuplicateAnalyses(1)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Reject transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
Reject one of the analyses:
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> try_transition(analysis, "reject", "rejected")
True
The analysis state is now rejected while the AR remains in sample_received:
>>> api.get_workflow_status_of(analysis)
'rejected'
>>> api.get_workflow_status_of(ar)
'sample_received'
I cannot submit a result for the rejected analysis:
>>> analysis.setResult(12)
>>> try_transition(analysis, "submit", "to_be_verified")
False
>>> api.get_workflow_status_of(analysis)
'rejected'
>>> api.get_workflow_status_of(ar)
'sample_received'
Submit results for the rest of the analyses:
>>> submit_analyses(ar)
The Analysis Request and the remaining analyses are now in to_be_verified status, while the rejected analysis stays rejected:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> sorted(map(api.get_workflow_status_of, analyses))
['rejected', 'to_be_verified', 'to_be_verified']
Reject one of the analyses that are in ‘to_be_verified’ state:
>>> analysis = filter(lambda an: an != analysis, analyses)[0]
>>> try_transition(analysis, "reject", "rejected")
True
>>> api.get_workflow_status_of(analysis)
'rejected'
The Analysis Request remains in to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
I cannot ‘reject’ a verified analysis:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
>>> analysis = filter(lambda an: api.get_workflow_status_of(an) == "to_be_verified", analyses)[0]
>>> try_transition(analysis, "verify", "verified")
True
>>> try_transition(analysis, "reject", "rejected")
False
>>> api.get_workflow_status_of(analysis)
'verified'
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Rejection of an analysis causes the duplicates to be removed
When the analysis a duplicate comes from is rejected, the duplicate is removed
too, regardless of its state.
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu, Fe, Au])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'open'
>>> ar_ans = ar.getAnalyses(full_objects=True)
>>> an_au = filter(lambda an: an.getKeyword() == 'Au', ar_ans)[0]
>>> an_cu = filter(lambda an: an.getKeyword() == 'Cu', ar_ans)[0]
>>> an_fe = filter(lambda an: an.getKeyword() == 'Fe', ar_ans)[0]
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> du_au = filter(lambda dup: dup.getKeyword() == 'Au', duplicates)[0]
>>> du_cu = filter(lambda dup: dup.getKeyword() == 'Cu', duplicates)[0]
>>> du_fe = filter(lambda dup: dup.getKeyword() == 'Fe', duplicates)[0]
When the analysis Cu (to_be_verified) is rejected, the duplicate is removed:
>>> du_cu_uid = api.get_uid(du_cu)
>>> try_transition(an_cu, "reject", "rejected")
True
>>> du_cu in worksheet.getDuplicateAnalyses()
False
>>> api.get_object_by_uid(du_cu_uid, None) is None
True
Submit the result for duplicate Au and reject Au analysis afterwards:
>>> du_au_uid = api.get_uid(du_au)
>>> du_au.setResult(12)
>>> try_transition(du_au, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(du_au)
'to_be_verified'
>>> try_transition(an_au, "reject", "rejected")
True
>>> api.get_workflow_status_of(an_au)
'rejected'
>>> du_au in worksheet.getDuplicateAnalyses()
False
>>> api.get_object_by_uid(du_au_uid, None) is None
True
Submit and verify the result for duplicate Fe and reject Fe analysis:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> du_fe_uid = api.get_uid(du_fe)
>>> du_fe.setResult(12)
>>> try_transition(du_fe, "submit", "to_be_verified")
True
>>> try_transition(du_fe, "verify", "verified")
True
>>> try_transition(an_fe, "reject", "rejected")
True
>>> api.get_workflow_status_of(an_fe)
'rejected'
>>> du_fe in worksheet.getDuplicateAnalyses()
False
>>> api.get_object_by_uid(du_fe_uid, None) is None
True
>>> bikasetup.setSelfVerificationEnabled(False)
Rejection of analyses with dependents
When an analysis that other analyses depend on is rejected, the rejection of
that dependency causes the auto-rejection of its dependents.
Prepare a calculation that depends on Cu and assign it to the Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> calc_au.setFormula("([Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
When Fe is rejected, Au analysis follows too:
>>> try_transition(fe, "reject", "rejected")
True
>>> api.get_workflow_status_of(fe)
'rejected'
>>> api.get_workflow_status_of(au)
'rejected'
While Cu analysis remains in unassigned state:
>>> api.get_workflow_status_of(cu)
'unassigned'
>>> api.get_workflow_status_of(ar)
'sample_received'
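The dependent cascade demonstrated above can be sketched in plain Python. The mapping and helpers below are hypothetical, not SENAITE code; they mirror the formulas used in this test (Fe = [Cu] * 10, Au = [Fe] / 2) and show how rejecting one analysis transitively rejects everything computed from it.

```python
# keyword -> keywords its calculation needs
depends_on = {
    "Fe": ["Cu"],   # Fe = [Cu] * 10
    "Au": ["Fe"],   # Au = [Fe] / 2
}

def dependents_of(keyword, depends_on):
    """Return all analyses that (transitively) depend on `keyword`."""
    found = set()
    stack = [keyword]
    while stack:
        current = stack.pop()
        for dep, needs in depends_on.items():
            if current in needs and dep not in found:
                found.add(dep)
                stack.append(dep)
    return found

def reject(keyword, states, depends_on):
    """Reject an analysis and auto-reject its dependents."""
    states[keyword] = "rejected"
    for dep in dependents_of(keyword, depends_on):
        states[dep] = "rejected"

states = {"Cu": "unassigned", "Fe": "unassigned", "Au": "unassigned"}
reject("Fe", states, depends_on)
print(states)   # Cu stays unassigned, Fe and Au are rejected
```

Rejecting Cu instead would reject all three, since Fe depends on Cu and Au depends on Fe.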
If we submit Cu and reject it thereafter:
>>> cu.setResult(12)
>>> try_transition(cu, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> try_transition(cu, "reject", "rejected")
True
>>> api.get_workflow_status_of(cu)
'rejected'
The Analysis Request rolls back to sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
Reset calculations:
>>> Fe.setCalculation(None)
>>> Au.setCalculation(None)
Effects of rejection of analysis to Analysis Request
Rejection of analyses has implications for the Analysis Request workflow,
because rejected analyses are no longer considered in the regular Analysis
Request transitions that rely on the states of its analyses.
When an Analysis is rejected, the analysis is not considered on submit:
>>> ar = new_ar([Cu, Fe])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu = filter(lambda an: an.getKeyword() == 'Cu', analyses)[0]
>>> fe = filter(lambda an: an.getKeyword() == 'Fe', analyses)[0]
>>> success = do_action_for(cu, "reject")
>>> api.get_workflow_status_of(cu)
'rejected'
>>> fe.setResult(12)
>>> success = do_action_for(fe, "submit")
>>> api.get_workflow_status_of(fe)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
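The status roll-up shown here can be sketched in plain Python. The function below is an illustrative model, not SENAITE's actual implementation: rejected analyses are simply excluded before the Analysis Request status is derived from the remaining ones.

```python
def ar_status(analysis_states):
    """Derive an Analysis Request status from its analyses' states,
    ignoring rejected analyses entirely (illustrative model only)."""
    active = [s for s in analysis_states if s != "rejected"]
    if active and all(s == "verified" for s in active):
        return "verified"
    if active and all(s in ("to_be_verified", "verified") for s in active):
        return "to_be_verified"
    return "sample_received"

# One analysis rejected, the other submitted: the AR follows the
# non-rejected one.
print(ar_status(["rejected", "to_be_verified"]))  # to_be_verified
print(ar_status(["rejected", "verified"]))        # verified
print(ar_status(["rejected", "unassigned"]))      # sample_received
```

The same exclusion explains the verification and publication steps that follow: a single rejected analysis never blocks the Analysis Request from reaching verified or published.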
Nor is it considered on verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> success = do_action_for(fe, "verify")
>>> api.get_workflow_status_of(fe)
'verified'
>>> api.get_workflow_status_of(ar)
'verified'
Nor is it considered on publication:
>>> success = do_action_for(ar, "publish")
>>> api.get_workflow_status_of(ar)
'published'
Reset self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
Rejection of retests
Create an Analysis Request, receive and submit all results:
>>> ar = new_ar([Cu, Fe, Au])
>>> success = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
>>> for analysis in analyses:
... analysis.setResult(12)
... success = do_action_for(analysis, "submit")
>>> api.get_workflow_status_of(ar)
'to_be_verified'
Retract one of the analyses:
>>> analysis = analyses[0]
>>> success = do_action_for(analysis, "retract")
>>> api.get_workflow_status_of(analysis)
'retracted'
>>> api.get_workflow_status_of(ar)
'sample_received'
Reject the retest:
>>> retest = analysis.getRetest()
>>> success = do_action_for(retest, "reject")
>>> api.get_workflow_status_of(retest)
'rejected'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
Verify remaining analyses:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> success = do_action_for(analyses[1], "verify")
>>> success = do_action_for(analyses[2], "verify")
>>> bikasetup.setSelfVerificationEnabled(False)
>>> api.get_workflow_status_of(ar)
'verified'
Check permissions for Reject transition
Create an Analysis Request:
>>> ar = new_ar([Cu])
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> allowed_roles = ['LabManager', 'Manager']
>>> non_allowed_roles = ['Analyst', 'Authenticated', 'LabClerk', 'Owner',
... 'RegulatoryInspector', 'Sampler', 'Verifier']
In unassigned state
In unassigned state, exactly these roles can reject:
>>> api.get_workflow_status_of(analysis)
'unassigned'
>>> get_roles_for_permission("Reject", analysis)
['LabManager', 'Manager']
The current user can reject because they have the LabManager role:
>>> isTransitionAllowed(analysis, "reject")
True
Also if the user has the role Manager:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(analysis, "reject")
True
But users with other roles cannot:
>>> setRoles(portal, TEST_USER_ID, non_allowed_roles)
>>> isTransitionAllowed(analysis, "reject")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
In assigned state
In assigned state, exactly these roles can reject:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.addAnalysis(analysis)
>>> api.get_workflow_status_of(analysis)
'assigned'
>>> get_roles_for_permission("Reject", analysis)
['LabManager', 'Manager']
>>> isTransitionAllowed(analysis, "reject")
True
The current user can reject because they have the LabManager role:
>>> isTransitionAllowed(analysis, "reject")
True
Also if the user has the role Manager:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(analysis, "reject")
True
But users with other roles cannot:
>>> setRoles(portal, TEST_USER_ID, non_allowed_roles)
>>> isTransitionAllowed(analysis, "reject")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
In to_be_verified state
In to_be_verified state, exactly these roles can reject:
>>> analysis.setResult(13)
>>> success = do_action_for(analysis, "submit")
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
>>> get_roles_for_permission("Reject", analysis)
['LabManager', 'Manager']
>>> isTransitionAllowed(analysis, "reject")
True
The current user can reject because they have the LabManager role:
>>> isTransitionAllowed(analysis, "reject")
True
Also if the user has the role Manager:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(analysis, "reject")
True
But users with other roles cannot:
>>> setRoles(portal, TEST_USER_ID, non_allowed_roles)
>>> isTransitionAllowed(analysis, "reject")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
In retracted state
In retracted state, the analysis cannot be rejected:
>>> success = do_action_for(analysis, "retract")
>>> api.get_workflow_status_of(analysis)
'retracted'
>>> get_roles_for_permission("Reject", analysis)
[]
>>> isTransitionAllowed(analysis, "reject")
False
In verified state
In verified state, the analysis cannot be rejected:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> analysis = analysis.getRetest()
>>> analysis.setResult(12)
>>> success = do_action_for(analysis, "submit")
>>> success = do_action_for(analysis, "verify")
>>> api.get_workflow_status_of(analysis)
'verified'
>>> get_roles_for_permission("Reject", analysis)
[]
>>> isTransitionAllowed(analysis, "reject")
False
In published state
In published state, the analysis cannot be rejected:
>>> do_action_for(ar, "publish")
(True, '')
>>> api.get_workflow_status_of(analysis)
'published'
>>> get_roles_for_permission("Reject", analysis)
[]
>>> isTransitionAllowed(analysis, "reject")
False
In cancelled state
In cancelled state, the analysis cannot be rejected:
>>> ar = new_ar([Cu])
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> success = do_action_for(ar, "cancel")
>>> api.get_workflow_status_of(analysis)
'cancelled'
>>> get_roles_for_permission("Reject", analysis)
[]
>>> isTransitionAllowed(analysis, "reject")
False
Disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
Check permissions for Rejected state
In rejected state, exactly these roles can view results:
>>> ar = new_ar([Cu])
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> success = do_action_for(analysis, "reject")
>>> api.get_workflow_status_of(analysis)
'rejected'
>>> get_roles_for_permission("senaite.core: View Results", analysis)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Publisher', 'RegulatoryInspector', 'Sampler', 'Verifier']
And no transition can be done from this state:
>>> getAllowedTransitions(analysis)
[]
IRejected interface is provided by rejected analyses
When rejected, routine analyses are marked with the IRejected interface:
>>> ar = new_ar([Cu])
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> IRejected.providedBy(analysis)
False
>>> success = do_action_for(analysis, "reject")
>>> IRejected.providedBy(analysis)
True
Analysis Request cancel guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisRequestCancel
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from bika.lims.workflow import getAllowedTransitions
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Cancel transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> api.get_workflow_status_of(ar)
'sample_due'
Cancel the Analysis Request:
>>> transitioned = do_action_for(ar, "cancel")
>>> api.get_workflow_status_of(ar)
'cancelled'
And all analyses the Analysis Request contains are cancelled too:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['cancelled', 'cancelled', 'cancelled']
At this point, only the “reinstate” transition is possible:
>>> getAllowedTransitions(ar)
['reinstate']
When the Analysis Request is reinstated, its status reverts to the one it had
before the cancellation took place:
>>> transitioned = do_action_for(ar, "reinstate")
>>> api.get_workflow_status_of(ar)
'sample_due'
And the analyses are reinstated too:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['unassigned', 'unassigned', 'unassigned']
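The "reinstate restores the previous status" behaviour can be sketched with a simple workflow-history stack. The `WorkflowObject` class below is a hypothetical model, not SENAITE's workflow machinery; it only illustrates why reinstating after a cancel lands back on the pre-cancellation status.

```python
class WorkflowObject:
    """Toy workflow object that records every status in a history stack."""

    def __init__(self, state):
        self.history = [state]

    @property
    def state(self):
        return self.history[-1]

    def cancel(self):
        self.history.append("cancelled")

    def reinstate(self):
        # Drop the "cancelled" entry: the status becomes whatever it
        # was right before the cancellation.
        assert self.state == "cancelled"
        self.history.pop()

ar = WorkflowObject("sample_due")
ar.cancel()
print(ar.state)      # cancelled
ar.reinstate()
print(ar.state)      # sample_due
```

Because the previous status is looked up rather than hard-coded, cancelling from sample_received and reinstating returns to sample_received, exactly as the doctest above shows.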
Receive the Analysis Request:
>>> transitioned = do_action_for(ar, "receive")
>>> api.get_workflow_status_of(ar)
'sample_received'
And we can cancel again:
>>> transitioned = do_action_for(ar, "cancel")
>>> api.get_workflow_status_of(ar)
'cancelled'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['cancelled', 'cancelled', 'cancelled']
And reinstate:
>>> transitioned = do_action_for(ar, "reinstate")
>>> api.get_workflow_status_of(ar)
'sample_received'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['unassigned', 'unassigned', 'unassigned']
Thus, the Analysis Request can be cancelled again:
>>> isTransitionAllowed(ar, "cancel")
True
But if we assign an analysis to a worksheet, the cancellation is no longer
possible:
>>> analysis = analyses[0]
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.addAnalysis(analysis)
>>> api.get_workflow_status_of(analysis)
'assigned'
>>> isTransitionAllowed(ar, "cancel")
False
But if we unassign the analysis, the transition is possible again:
>>> worksheet.removeAnalysis(analysis)
>>> api.get_workflow_status_of(analysis)
'unassigned'
>>> isTransitionAllowed(ar, "cancel")
True
If a result for any given analysis is submitted, the Analysis Request cannot be
transitioned to “cancelled” status:
>>> analysis.setResult(12)
>>> transitioned = do_action_for(analysis, "submit")
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
>>> isTransitionAllowed(ar, "cancel")
False
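The cancel guard exercised above can be distilled into a small sketch. The function below is assumed logic inferred from this test, not SENAITE's real guard: cancellation stays possible only while every analysis is still unassigned, with neither a worksheet assignment nor a submitted result.

```python
def can_cancel(analysis_states):
    """Illustrative cancel guard: block cancellation as soon as any
    analysis is assigned to a worksheet or has results submitted."""
    blocked = {"assigned", "to_be_verified", "verified", "published"}
    return all(state not in blocked for state in analysis_states)

print(can_cancel(["unassigned", "unassigned"]))      # True
print(can_cancel(["assigned", "unassigned"]))        # False
print(can_cancel(["to_be_verified", "unassigned"]))  # False
```

Unassigning the analysis removes it from the blocked set, which is why the transition became possible again in the doctest above.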
Analysis Request invalidate guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisRequestInvalidate
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IAnalysisRequestRetest
>>> from bika.lims.utils.analysis import create_analysis
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': DateTime(),
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Invalidate transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> ar
<AnalysisRequest at /plone/clients/client-1/W-0001>
Analysis Request cannot be invalidated when the status is sample_due:
>>> api.get_workflow_status_of(ar)
'sample_due'
>>> isTransitionAllowed(ar, "invalidate")
False
Analysis Request cannot be invalidated when the status is sample_received:
>>> success = do_action_for(ar, "receive")
>>> api.get_workflow_status_of(ar)
'sample_received'
>>> isTransitionAllowed(ar, "invalidate")
False
Submit all analyses:
>>> for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(12)
... success = do_action_for(analysis, "submit")
Analysis Request cannot be invalidated when status is to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> isTransitionAllowed(ar, "invalidate")
False
Verify all analyses:
>>> setup.setSelfVerificationEnabled(True)
>>> for analysis in ar.getAnalyses(full_objects=True):
... success = do_action_for(analysis, "verify")
>>> setup.setSelfVerificationEnabled(False)
Analysis Request can be invalidated if verified:
>>> api.get_workflow_status_of(ar)
'verified'
>>> isTransitionAllowed(ar, "invalidate")
True
When invalidated, a retest is created:
>>> success = do_action_for(ar, "invalidate")
>>> api.get_workflow_status_of(ar)
'invalid'
>>> retest = ar.getRetest()
>>> retest
<AnalysisRequest at /plone/clients/client-1/W-0001-R01>
And the retest provides IAnalysisRequestRetest interface:
>>> IAnalysisRequestRetest.providedBy(retest)
True
From the retest, I can go back to the invalidated Analysis Request:
>>> retest.getInvalidated()
<AnalysisRequest at /plone/clients/client-1/W-0001>
Invalidate a sample with multiple copies of the same analysis
Create and receive an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> ar
<AnalysisRequest at /plone/clients/client-1/W-0002>
>>> success = do_action_for(ar, "receive")
>>> api.get_workflow_status_of(ar)
'sample_received'
Add another copy of existing analyses:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> for analysis in analyses:
... duplicate = create_analysis(ar, analysis)
>>> analyses = ar.getAnalyses(full_objects=True)
>>> sorted(map(api.get_id, analyses))
['Au', 'Au-1', 'Cu', 'Cu-1', 'Fe', 'Fe-1']
Submit and verify analyses:
>>> setup.setSelfVerificationEnabled(True)
>>> for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(12)
... submitted = do_action_for(analysis, "submit")
... verified = do_action_for(analysis, "verify")
>>> setup.setSelfVerificationEnabled(False)
Invalidate the sample:
>>> success = do_action_for(ar, "invalidate")
>>> api.get_workflow_status_of(ar)
'invalid'
>>> retest = ar.getRetest()
>>> retest
<AnalysisRequest at /plone/clients/client-1/W-0002-R01>
And the retest provides IAnalysisRequestRetest interface:
>>> IAnalysisRequestRetest.providedBy(retest)
True
From the retest, I can go back to the invalidated Analysis Request:
>>> retest.getInvalidated()
<AnalysisRequest at /plone/clients/client-1/W-0002>
Check permissions for Invalidate transition
Create an Analysis Request, receive, submit results and verify them:
>>> ar = new_ar([Cu])
>>> success = do_action_for(ar, "receive")
>>> setup.setSelfVerificationEnabled(True)
>>> for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(12)
... submitted = do_action_for(analysis, "submit")
... verified = do_action_for(analysis, "verify")
>>> setup.setSelfVerificationEnabled(False)
>>> api.get_workflow_status_of(ar)
'verified'
Exactly these roles can invalidate:
>>> get_roles_for_permission("senaite.core: Transition: Invalidate", ar)
['LabManager', 'Manager']
The current user can invalidate because they have the LabManager role:
>>> isTransitionAllowed(ar, "invalidate")
True
Users with other roles cannot:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk', 'Owner'])
>>> isTransitionAllowed(ar, "invalidate")
False
Reset settings:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Analysis Request sample guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisRequestSample
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from bika.lims.workflow import getAllowedTransitions
>>> from DateTime import DateTime
>>> from plone import api as ploneapi
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def new_ar(services, ar_template=None):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SampleType': sampletype.UID(),
... 'Template': ar_template,
... }
... date_key = "DateSampled"
... if ar_template and ar_template.getSamplingRequired():
... date_key = "SamplingDate"
... elif bikasetup.getSamplingWorkflowEnabled():
... date_key = "SamplingDate"
... values[date_key] = timestamp()
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
>>> def roles_for_transition_check(transition_id, roles, object):
... granted = list()
... ungranted = list()
... for role in roles:
... setRoles(portal, TEST_USER_ID, [role])
... if isTransitionAllowed(object, transition_id):
... granted.append(role)
... else:
... ungranted.append(role)
... setRoles(portal, TEST_USER_ID, ['LabManager',])
... return granted, ungranted
>>> def are_roles_for_transition_granted(transition_id, roles, object):
... gr, ungr = roles_for_transition_check(transition_id, roles, object)
... return len(ungr) == 0 and len(gr) > 0
>>> def are_roles_for_transition_ungranted(transition_id, roles, object):
... gr, ungr = roles_for_transition_check(transition_id, roles, object)
... return len(gr) == 0 and len(ungr) > 0
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = timestamp()
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> ar_template = api.create(bikasetup.bika_artemplates, "ARTemplate", title="Test Template", SampleType=sampletype)
>>> sampler_user = ploneapi.user.create(email="sampler1@example.com", username="sampler1", password="secret", properties=dict(fullname="Sampler 1"))
>>> setRoles(portal, "sampler1", ['Authenticated', 'Member', 'Sampler'])
Sample transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu])
By default, the Analysis Request transitions to “sample_due” status:
>>> api.get_workflow_status_of(ar)
'sample_due'
And from this status, the transition “sample” is not possible:
>>> isTransitionAllowed(ar, "sample")
False
If the setup setting “SamplingWorkflowEnabled” is set to True, the status of
the Analysis Request once created is “to_be_sampled”:
>>> bikasetup.setSamplingWorkflowEnabled(True)
>>> ar = new_ar([Cu])
>>> api.get_workflow_status_of(ar)
'to_be_sampled'
But the transition is still not possible:
>>> isTransitionAllowed(ar, "sample")
False
Because we have set neither a Sampler nor the date the sample was collected:
>>> date_sampled = timestamp()
>>> ar.setDateSampled(date_sampled)
>>> isTransitionAllowed(ar, "sample")
False
>>> ar.setSampler(sampler_user.id)
>>> isTransitionAllowed(ar, "sample")
True
When “sample” transition is performed, the status becomes “sample_due”:
>>> success = do_action_for(ar, "sample")
>>> api.get_workflow_status_of(ar)
'sample_due'
And the values for DateSampled and Sampler are kept:
>>> ar.getSampler() == sampler_user.id
True
>>> ar.getDateSampled().strftime("%Y-%m-%d") == date_sampled
True
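The sample guard exercised above can be sketched as a small predicate. This is assumed logic inferred from the doctest, not SENAITE's real guard implementation: the "sample" transition requires the to_be_sampled status plus both a Sampler and a DateSampled.

```python
def can_sample(state, sampler, date_sampled):
    """Illustrative sample guard: allow the transition only from
    to_be_sampled, and only once Sampler and DateSampled are set."""
    if state != "to_be_sampled":
        return False
    return bool(sampler) and bool(date_sampled)

print(can_sample("sample_due", "sampler1", "2024-01-01"))    # False
print(can_sample("to_be_sampled", None, "2024-01-01"))       # False
print(can_sample("to_be_sampled", "sampler1", None))         # False
print(can_sample("to_be_sampled", "sampler1", "2024-01-01")) # True
```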
Check permissions for sample transition
Declare the roles allowed and not allowed to perform the “sample” transition:
>>> all_roles = portal.acl_users.portal_role_manager.validRoles()
>>> allowed = ["LabManager", "Manager", "Sampler", "SamplingCoordinator"]
>>> not_allowed = filter(lambda role: role not in allowed, all_roles)
Create an Analysis Request by using a template with Sampling workflow enabled:
>>> bikasetup.setSamplingWorkflowEnabled(False)
>>> ar_template.setSamplingRequired(True)
>>> ar = new_ar([Cu], ar_template)
>>> ar.setDateSampled(timestamp())
>>> ar.setSampler(sampler_user.id)
Exactly these roles can Sample:
>>> get_roles_for_permission("senaite.core: Transition: Sample Sample", ar)
['LabManager', 'Manager', 'Sampler', 'SamplingCoordinator']
The current user can sample because they have the LabManager role:
>>> isTransitionAllowed(ar, "sample")
True
The user can sample if they have any of the granted roles:
>>> are_roles_for_transition_granted("sample", allowed, ar)
True
But not with any of the other roles:
>>> are_roles_for_transition_ungranted("sample", not_allowed, ar)
True
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Analysis Request to_be_sampled guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisRequestToBeSampled
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from bika.lims.workflow import getAllowedTransitions
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services, ar_template=None):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID(),
... 'Template': ar_template,
... }
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = timestamp()
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> ar_template = api.create(bikasetup.bika_artemplates, "ARTemplate", title="Test Template", SampleType=sampletype)
To_be_sampled transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu])
By default, the Analysis Request transitions to “sample_due” status:
>>> api.get_workflow_status_of(ar)
'sample_due'
But if the setup setting “SamplingWorkflowEnabled” is set to True, the status
of the Analysis Request once created is “to_be_sampled”:
>>> bikasetup.setSamplingWorkflowEnabled(True)
>>> ar = new_ar([Cu])
>>> api.get_workflow_status_of(ar)
'to_be_sampled'
If we use a template with “SamplingRequired” setting set to False, the status
of the Analysis Request once created is “sample_due”, regardless of the setting
from setup:
>>> ar_template.setSamplingRequired(False)
>>> ar = new_ar([Cu], ar_template)
>>> api.get_workflow_status_of(ar)
'sample_due'
And the same applies the other way round:
>>> bikasetup.setSamplingWorkflowEnabled(False)
>>> ar_template.setSamplingRequired(True)
>>> ar = new_ar([Cu], ar_template)
>>> api.get_workflow_status_of(ar)
'to_be_sampled'
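The precedence shown above can be summarised with a small sketch (plain Python, not part of this doctest; the function name is illustrative only):

```python
def initial_sample_state(setup_enabled, template_requires=None):
    """Decide the initial workflow state of a new sample.

    When a template is used, its SamplingRequired setting takes
    precedence over the setup-wide SamplingWorkflowEnabled flag.
    """
    required = setup_enabled if template_requires is None else template_requires
    return "to_be_sampled" if required else "sample_due"

initial_sample_state(True)          # no template: the setup flag wins
initial_sample_state(True, False)   # template disables sampling
initial_sample_state(False, True)   # template requires sampling
```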
Analysis retest guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisRetest
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> setup.setSelfVerificationEnabled(True)
Retest transition and guard basic constraints
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu, Fe, Au])
We cannot retest analyses if no results have been submitted yet:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> analysis = analyses[0]
>>> isTransitionAllowed(analysis, "retest")
False
>>> submit_analyses(ar)
The retest transition can now be performed, because the status of the analysis
is to_be_verified:
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
>>> isTransitionAllowed(analysis, "retest")
True
When a retest transition is performed, a copy of the original analysis is
created (the “retest”) and the original analysis is transitioned to verified:
>>> analysis = analyses[0]
>>> try_transition(analysis, "retest", "verified")
True
>>> api.get_workflow_status_of(analysis)
'verified'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> sorted(map(api.get_workflow_status_of, analyses))
['to_be_verified', 'to_be_verified', 'unassigned', 'verified']
Since there is one new analysis (the “retest”) in unassigned status, the
Analysis Request is transitioned to sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
The “retest” is a copy of the original analysis:
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "unassigned", analyses)[0]
>>> analysis.getRetest() == retest
True
>>> retest.getRetestOf() == analysis
True
>>> retest.getKeyword() == analysis.getKeyword()
True
But it does not keep the result:
>>> not retest.getResult()
True
And Result capture date is None:
>>> not retest.getResultCaptureDate()
True
If I submit a result for the “retest”:
>>> retest.setResult(analysis.getResult())
>>> try_transition(retest, "submit", "to_be_verified")
True
The status of both the analysis and the Analysis Request is “to_be_verified”:
>>> api.get_workflow_status_of(retest)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
And I can even ask for a retest of the retest:
>>> try_transition(retest, "retest", "verified")
True
>>> api.get_workflow_status_of(retest)
'verified'
A new “retest” in unassigned state is created and the sample rolls back to
sample_received status:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> sorted(map(api.get_workflow_status_of, analyses))
['to_be_verified', 'to_be_verified', 'unassigned', 'verified', 'verified']
>>> api.get_workflow_status_of(ar)
'sample_received'
Auto-rollback of Worksheet on analysis retest
The retesting of an analysis from a Worksheet that is in “to_be_verified” state
causes the worksheet to roll back to the “open” state.
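The rollback rule can be sketched as a simple predicate (plain Python; the state names mirror those used in this test, the function itself is hypothetical):

```python
def needs_rollback(analysis_states):
    """A sample or worksheet in "to_be_verified" must roll back to an
    open state as soon as any of its analyses is no longer submitted."""
    unsubmitted = {"registered", "unassigned", "assigned"}
    return any(state in unsubmitted for state in analysis_states)

needs_rollback(["to_be_verified", "to_be_verified"])  # False
needs_rollback(["to_be_verified", "assigned"])        # True
```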
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu, Fe, Au])
Create a new Worksheet, assign all analyses and submit:
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... ws.addAnalysis(analysis)
>>> submit_analyses(ar)
The state for both the Analysis Request and Worksheet is “to_be_verified”:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(ws)
'to_be_verified'
Retest one analysis:
>>> analysis = ws.getAnalyses()[0]
>>> try_transition(analysis, "retest", "verified")
True
A rollback of the state of Analysis Request and Worksheet takes place:
>>> api.get_workflow_status_of(ar)
'sample_received'
>>> api.get_workflow_status_of(ws)
'open'
And both contain an additional analysis:
>>> len(ar.getAnalyses())
4
>>> len(ws.getAnalyses())
4
The state of this additional analysis, the “retest”, is assigned:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "assigned", analyses)[0]
>>> retest.getKeyword() == analysis.getKeyword()
True
>>> retest in ws.getAnalyses()
True
Retest of an analysis with dependents
Retesting an analysis that other analyses depend on (its dependents) forces
those dependents to be retested too:
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(setup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(setup.bika_calculations, 'Calculation', title='Calc for Au')
>>> calc_au.setFormula("([Fe])/2")
>>> Au.setCalculation(calc_au)
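The bracketed placeholders in these formulas refer to analysis keywords. How such a formula could be interpolated with captured results can be sketched as follows (a simplification; SENAITE's actual calculation engine is more involved):

```python
import re

def interpolate(formula, results):
    # Replace every [Keyword] placeholder with the mapped result
    return re.sub(r"\[(\w+)\]", lambda m: str(results[m.group(1)]), formula)

expr = interpolate("[Cu]*10", {"Cu": 20})  # "20*10"
eval(expr)                                 # 200
```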
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu_analysis = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au_analysis = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> cu_analysis.setResult(20)
>>> fe_analysis.setResult(12)
>>> au_analysis.setResult(10)
Submit Au analysis and the rest will follow:
>>> try_transition(au_analysis, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(au_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(fe_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(cu_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
If I retest Fe, Au analysis is transitioned to verified and retested too:
>>> try_transition(fe_analysis, "retest", "verified")
True
>>> api.get_workflow_status_of(fe_analysis)
'verified'
>>> api.get_workflow_status_of(au_analysis)
'verified'
As well as the Cu analysis, which is a dependency of Fe:
>>> api.get_workflow_status_of(cu_analysis)
'verified'
Hence, three new “retests” are generated accordingly:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> len(analyses)
6
>>> au_analyses = filter(lambda an: an.getKeyword()=="Au", analyses)
>>> sorted(map(api.get_workflow_status_of, au_analyses))
['unassigned', 'verified']
>>> fe_analyses = filter(lambda an: an.getKeyword()=="Fe", analyses)
>>> sorted(map(api.get_workflow_status_of, fe_analyses))
['unassigned', 'verified']
>>> cu_analyses = filter(lambda an: an.getKeyword()=="Cu", analyses)
>>> sorted(map(api.get_workflow_status_of, cu_analyses))
['unassigned', 'verified']
And the current state of the Analysis Request is sample_received now:
>>> api.get_workflow_status_of(ar)
'sample_received'
Retest of an analysis with dependencies hierarchy (recursive up)
Retesting an analysis with dependencies should end up with retests for all of
them, regardless of their position in the dependency hierarchy. The system
works recursively upwards, finding all dependencies.
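The recursive lookup can be sketched as follows (a stand-in class replaces the real analysis objects; the accessor name `getDependencies` mirrors the one used by SENAITE analyses, but treat it as an assumption):

```python
class StubAnalysis:
    def __init__(self, keyword, dependencies=()):
        self.keyword = keyword
        self.dependencies = list(dependencies)

    def getDependencies(self):
        return self.dependencies

def collect_dependencies(analysis, collected=None):
    """Walk the calculation graph upwards, gathering every analysis
    the given one depends on, directly or transitively."""
    if collected is None:
        collected = set()
    for dep in analysis.getDependencies():
        if dep not in collected:
            collected.add(dep)
            collect_dependencies(dep, collected)
    return collected

cu = StubAnalysis("Cu")
fe = StubAnalysis("Fe", [cu])   # Fe = [Cu]*10
au = StubAnalysis("Au", [fe])   # Au = [Fe]/2
sorted(a.keyword for a in collect_dependencies(au))  # ['Cu', 'Fe']
```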
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(setup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(setup.bika_calculations, 'Calculation', title='Calc for Au')
>>> calc_au.setFormula("([Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu_analysis = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au_analysis = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> cu_analysis.setResult(20)
>>> fe_analysis.setResult(12)
>>> au_analysis.setResult(10)
Submit Au analysis and the rest will follow:
>>> try_transition(au_analysis, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(au_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(fe_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(cu_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
If I retest Au, Fe analysis is transitioned to verified and retested too:
>>> try_transition(au_analysis, "retest", "verified")
True
>>> api.get_workflow_status_of(fe_analysis)
'verified'
>>> api.get_workflow_status_of(au_analysis)
'verified'
As well as the Cu analysis, which is a dependency of Fe:
>>> api.get_workflow_status_of(cu_analysis)
'verified'
Hence, three new “retests” are generated accordingly:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> len(analyses)
6
>>> au_analyses = filter(lambda an: an.getKeyword()=="Au", analyses)
>>> sorted(map(api.get_workflow_status_of, au_analyses))
['unassigned', 'verified']
>>> fe_analyses = filter(lambda an: an.getKeyword()=="Fe", analyses)
>>> sorted(map(api.get_workflow_status_of, fe_analyses))
['unassigned', 'verified']
>>> cu_analyses = filter(lambda an: an.getKeyword()=="Cu", analyses)
>>> sorted(map(api.get_workflow_status_of, cu_analyses))
['unassigned', 'verified']
And the current state of the Analysis Request is sample_received now:
>>> api.get_workflow_status_of(ar)
'sample_received'
Retest of an analysis with dependents hierarchy (recursive down)
Retesting an analysis with dependents should end up with retests for all of
them, regardless of their position in the dependents hierarchy. The system
works recursively downwards, finding all dependents.
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(setup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(setup.bika_calculations, 'Calculation', title='Calc for Au')
>>> calc_au.setFormula("([Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu_analysis = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au_analysis = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> cu_analysis.setResult(20)
>>> fe_analysis.setResult(12)
>>> au_analysis.setResult(10)
Submit Au analysis and the rest will follow:
>>> try_transition(au_analysis, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(au_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(fe_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(cu_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
If I retest Cu, Fe analysis is transitioned to verified and retested too:
>>> try_transition(cu_analysis, "retest", "verified")
True
>>> api.get_workflow_status_of(cu_analysis)
'verified'
>>> api.get_workflow_status_of(fe_analysis)
'verified'
As well as the Au analysis, which is a dependent of Fe:
>>> api.get_workflow_status_of(au_analysis)
'verified'
Hence, three new “retests” are generated accordingly:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> len(analyses)
6
>>> au_analyses = filter(lambda an: an.getKeyword()=="Au", analyses)
>>> sorted(map(api.get_workflow_status_of, au_analyses))
['unassigned', 'verified']
>>> fe_analyses = filter(lambda an: an.getKeyword()=="Fe", analyses)
>>> sorted(map(api.get_workflow_status_of, fe_analyses))
['unassigned', 'verified']
>>> cu_analyses = filter(lambda an: an.getKeyword()=="Cu", analyses)
>>> sorted(map(api.get_workflow_status_of, cu_analyses))
['unassigned', 'verified']
And the current state of the Analysis Request is sample_received now:
>>> api.get_workflow_status_of(ar)
'sample_received'
Analysis retract guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisRetract
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IRetracted
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Retract transition and guard basic constraints
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu, Fe, Au])
>>> submit_analyses(ar)
The status of the Analysis Request and its analyses is to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['to_be_verified', 'to_be_verified', 'to_be_verified']
Retract one of the analyses:
>>> analysis = analyses[0]
>>> try_transition(analysis, "retract", "retracted")
True
>>> api.get_workflow_status_of(analysis)
'retracted'
And one additional analysis has been added in the unassigned state:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> sorted(map(api.get_workflow_status_of, analyses))
['retracted', 'to_be_verified', 'to_be_verified', 'unassigned']
And the Analysis Request has been transitioned to sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
The new analysis is a copy of the retracted one:
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "unassigned", analyses)[0]
>>> analysis.getRetest() == retest
True
>>> retest.getRetestOf() == analysis
True
>>> retest.getKeyword() == analysis.getKeyword()
True
But it does not keep the result:
>>> not retest.getResult()
True
And Result capture date is None:
>>> not retest.getResultCaptureDate()
True
If I submit the result for the new analysis:
>>> retest.setResult(analysis.getResult())
>>> try_transition(retest, "submit", "to_be_verified")
True
The status of both the analysis and the Analysis Request is “to_be_verified”:
>>> api.get_workflow_status_of(retest)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
And I can even retract the retest:
>>> try_transition(retest, "retract", "retracted")
True
>>> api.get_workflow_status_of(retest)
'retracted'
And one additional analysis has been added in the unassigned state:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> sorted(map(api.get_workflow_status_of, analyses))
['retracted', 'retracted', 'to_be_verified', 'to_be_verified', 'unassigned']
And again, the Analysis Request has been transitioned to sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
Auto-rollback of Worksheet on analysis retraction
Retracting an analysis from a Worksheet that is in “to_be_verified” state
causes the worksheet to roll back to the “open” state.
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu, Fe, Au])
Create a new Worksheet, assign all analyses and submit:
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... ws.addAnalysis(analysis)
>>> submit_analyses(ar)
The state for both the Analysis Request and Worksheet is “to_be_verified”:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(ws)
'to_be_verified'
Retract one analysis:
>>> analysis = ws.getAnalyses()[0]
>>> try_transition(analysis, "retract", "retracted")
True
A rollback of the state of Analysis Request and Worksheet takes place:
>>> api.get_workflow_status_of(ar)
'sample_received'
>>> api.get_workflow_status_of(ws)
'open'
And both contain an additional analysis:
>>> len(ar.getAnalyses())
4
>>> len(ws.getAnalyses())
4
The state of this additional analysis, the retest, is “assigned”:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "assigned", analyses)[0]
>>> retest.getKeyword() == analysis.getKeyword()
True
>>> retest in ws.getAnalyses()
True
Retraction of results for analyses with dependents
When an analysis that other analyses depend on (a dependency) is retracted,
its dependents are automatically retracted too. Its own dependencies are
retracted as well.
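The cascade through dependents can be sketched like this (stand-in objects, not the real SENAITE API; the real workflow also cascades to dependencies, which this sketch omits):

```python
class StubAnalysis:
    def __init__(self, keyword, dependents=()):
        self.keyword = keyword
        self.dependents = list(dependents)
        self.state = "to_be_verified"

    def getDependents(self):
        return self.dependents

def cascade_retract(analysis):
    # Retract the analysis itself, then every analysis whose
    # calculation consumes its result (recursively downwards)
    analysis.state = "retracted"
    for dependent in analysis.getDependents():
        if dependent.state != "retracted":
            cascade_retract(dependent)

au = StubAnalysis("Au")
fe = StubAnalysis("Fe", [au])   # Au depends on Fe
cu = StubAnalysis("Cu", [fe])   # Fe depends on Cu
cascade_retract(fe)
# fe and au end up retracted; cu is untouched in this sketch
```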
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> calc_au.setFormula("([Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu_analysis = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au_analysis = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> cu_analysis.setResult(20)
>>> fe_analysis.setResult(12)
>>> au_analysis.setResult(10)
Submit Au analysis and the rest will follow:
>>> try_transition(au_analysis, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(au_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(fe_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(cu_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
If I retract Fe, Au analysis is retracted automatically too:
>>> try_transition(fe_analysis, "retract", "retracted")
True
>>> api.get_workflow_status_of(fe_analysis)
'retracted'
>>> api.get_workflow_status_of(au_analysis)
'retracted'
As well as Cu analysis (a dependency of Fe):
>>> api.get_workflow_status_of(cu_analysis)
'retracted'
Hence, three new analyses are generated accordingly:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> len(analyses)
6
>>> au_analyses = filter(lambda an: an.getKeyword()=="Au", analyses)
>>> sorted(map(api.get_workflow_status_of, au_analyses))
['retracted', 'unassigned']
>>> fe_analyses = filter(lambda an: an.getKeyword()=="Fe", analyses)
>>> sorted(map(api.get_workflow_status_of, fe_analyses))
['retracted', 'unassigned']
>>> cu_analyses = filter(lambda an: an.getKeyword()=="Cu", analyses)
>>> sorted(map(api.get_workflow_status_of, cu_analyses))
['retracted', 'unassigned']
And the current state of the Analysis Request is sample_received now:
>>> api.get_workflow_status_of(ar)
'sample_received'
IRetracted interface is provided by retracted analyses
When retracted, routine analyses are marked with the IRetracted interface:
>>> sample = new_ar([Cu])
>>> submit_analyses(sample)
>>> analysis = sample.getAnalyses(full_objects=True)[0]
>>> IRetracted.providedBy(analysis)
False
>>> success = do_action_for(analysis, "retract")
>>> IRetracted.providedBy(analysis)
True
But the retest does not provide IRetracted:
>>> retest = analysis.getRetest()
>>> IRetracted.providedBy(retest)
False
Retract an analysis with a result that is a Detection Limit
Allow the user to manually enter the detection limit as the result:
>>> Cu.setAllowManualDetectionLimit(True)
Create the sample:
>>> sample = new_ar([Cu])
>>> cu = sample.getAnalyses(full_objects=True)[0]
>>> cu.setResult("< 10")
>>> success = do_action_for(cu, "submit")
>>> cu.getResult()
'10'
>>> cu.getFormattedResult(html=False)
'< 10'
>>> cu.isLowerDetectionLimit()
True
>>> cu.getDetectionLimitOperand()
'<'
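The split between the stored result and the operand can be sketched like this (a simplification; the real field handling lives on the analysis object):

```python
def parse_detection_limit(raw):
    """Split a manually entered result such as "< 10" into its
    detection-limit operand and the numeric part."""
    raw = raw.strip()
    if raw[:1] in ("<", ">"):
        return raw[0], raw[1:].strip()
    return "", raw

parse_detection_limit("< 10")  # ('<', '10')
parse_detection_limit("15")    # ('', '15')
```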
The Detection Limit is not kept on the retest:
>>> success = do_action_for(cu, "retract")
>>> retest = cu.getRetest()
>>> retest.getResult()
''
>>> retest.getFormattedResult(html=False)
''
>>> retest.isLowerDetectionLimit()
False
>>> retest.getDetectionLimitOperand()
''
Do the same with Upper Detection Limit (UDL):
>>> sample = new_ar([Cu])
>>> cu = sample.getAnalyses(full_objects=True)[0]
>>> cu.setResult("> 10")
>>> success = do_action_for(cu, "submit")
>>> cu.getResult()
'10'
>>> cu.getFormattedResult(html=False)
'> 10'
>>> cu.isUpperDetectionLimit()
True
>>> cu.getDetectionLimitOperand()
'>'
The Detection Limit is not kept on the retest:
>>> success = do_action_for(cu, "retract")
>>> retest = cu.getRetest()
>>> retest.getResult()
''
>>> retest.getFormattedResult(html=False)
''
>>> retest.isUpperDetectionLimit()
False
>>> retest.getDetectionLimitOperand()
''
Analysis submission guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisSubmit
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager', 'Sampler'])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Basic constraints for Analysis submission
Create an Analysis Request:
>>> values = {'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
>>> service_uids = map(api.get_uid, [Cu])
>>> ar = create_analysisrequest(client, request, values, service_uids)
Cannot submit if the Analysis Request has not yet been received:
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> analysis.setResult(12)
>>> isTransitionAllowed(analysis, "submit")
False
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'registered'
But if I receive the Analysis Request:
>>> transitioned = do_action_for(ar, "receive")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(ar)
'sample_received'
I can then submit the analysis:
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
And I cannot resubmit the analysis:
>>> isTransitionAllowed(analysis, "submit")
False
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
Basic constraints for “field” Analysis submission
Set analysis Cu with Point of Capture “field”:
>>> Cu.setPointOfCapture("field")
>>> Cu.getPointOfCapture()
'field'
And activate sampling workflow:
>>> bikasetup.setSamplingWorkflowEnabled(True)
>>> bikasetup.getSamplingWorkflowEnabled()
True
Create an Analysis Request:
>>> values = {'Client': client.UID(),
... 'Contact': contact.UID(),
... 'SampleType': sampletype.UID()}
>>> service_uids = map(api.get_uid, [Cu, Fe])
>>> ar = create_analysisrequest(client, request, values, service_uids)
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu = filter(lambda an: an.getKeyword() == "Cu", analyses)[0]
>>> fe = filter(lambda an: an.getKeyword() == "Fe", analyses)[0]
Cannot submit Cu, because the Analysis Request has not yet been sampled:
>>> cu.setResult(12)
>>> isTransitionAllowed(cu, "submit")
False
>>> api.get_workflow_status_of(ar)
'to_be_sampled'
Nor can I submit Fe, because the Analysis Request has not been received:
>>> fe.setResult(12)
>>> isTransitionAllowed(fe, "submit")
False
If I sample the Analysis Request:
>>> ar.setDateSampled(timestamp())
>>> ar.setSampler(TEST_USER_ID)
>>> transitioned = do_action_for(ar, "sample")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(ar)
'sample_due'
Then I can submit Cu:
>>> transitioned = do_action_for(cu, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(cu)
'to_be_verified'
But I cannot submit Fe yet:
>>> isTransitionAllowed(fe, "submit")
False
Unless I receive the Analysis Request:
>>> transitioned = do_action_for(ar, "receive")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(ar)
'sample_received'
>>> transitioned = do_action_for(fe, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(fe)
'to_be_verified'
And I cannot resubmit again:
>>> isTransitionAllowed(cu, "submit")
False
>>> isTransitionAllowed(fe, "submit")
False
Deactivate the sampling workflow and reset Cu as a lab analysis:
>>> Cu.setPointOfCapture("lab")
>>> bikasetup.setSamplingWorkflowEnabled(False)
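The point-of-capture rule exercised above can be sketched in plain Python. This is a hypothetical illustration of the guard logic, not SENAITE's actual implementation:

```python
def can_submit(point_of_capture, sample_state):
    """Hypothetical sketch of the submit guard: 'field' analyses
    become submittable once the sample has been sampled, while
    'lab' analyses require the sample to be received."""
    if point_of_capture == "field":
        return sample_state in ("sample_due", "sample_received")
    return sample_state == "sample_received"

print(can_submit("field", "to_be_sampled"))   # False: not yet sampled
print(can_submit("field", "sample_due"))      # True: sampled is enough
print(can_submit("lab", "sample_due"))        # False: not yet received
print(can_submit("lab", "sample_received"))   # True
```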
Auto submission of Analysis Requests when all their analyses are submitted
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
Set results for some of the analyses only:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> analyses[0].setResult('12')
>>> analyses[1].setResult('12')
We’ve set some results, but all analyses are still in the unassigned state:
>>> map(api.get_workflow_status_of, analyses)
['unassigned', 'unassigned', 'unassigned']
Transition some of them:
>>> transitioned = do_action_for(analyses[0], "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analyses[0])
'to_be_verified'
>>> transitioned = do_action_for(analyses[1], "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analyses[1])
'to_be_verified'
The Analysis Request status is still in sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
If we try to transition the remaining analysis w/o result, nothing happens:
>>> transitioned = do_action_for(analyses[2], "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analyses[2])
'unassigned'
>>> api.get_workflow_status_of(ar)
'sample_received'
Even if we try with an empty or None result:
>>> analyses[2].setResult('')
>>> transitioned = do_action_for(analyses[2], "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analyses[2])
'unassigned'
>>> analyses[2].setResult(None)
>>> transitioned = do_action_for(analyses[2], "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analyses[2])
'unassigned'
But it will work if we try with a result of 0:
>>> analyses[2].setResult(0)
>>> transitioned = do_action_for(analyses[2], "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analyses[2])
'to_be_verified'
And the AR will follow:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
And we cannot re-submit analyses that have been already submitted:
>>> transitioned = do_action_for(analyses[2], "submit")
>>> transitioned[0]
False
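The auto-submission cascade shown above can be sketched in plain Python. `Analysis` and `Sample` here are hypothetical stand-ins, not SENAITE classes; the sketch only illustrates that None and empty results block submission (0 is valid) and that the sample follows once its last analysis is submitted:

```python
# Hypothetical sketch of the auto-submit cascade; not the actual
# SENAITE implementation.

class Analysis:
    def __init__(self, result=None):
        self.result = result
        self.state = "unassigned"

    def can_submit(self):
        # A result of 0 is valid; only None and "" block submission
        return self.result not in (None, "")

    def submit(self):
        if self.state != "unassigned" or not self.can_submit():
            return False
        self.state = "to_be_verified"
        return True

class Sample:
    def __init__(self, analyses):
        self.analyses = analyses
        self.state = "sample_received"

    def maybe_follow(self):
        # The sample follows once every analysis has been submitted
        if all(a.state == "to_be_verified" for a in self.analyses):
            self.state = "to_be_verified"

sample = Sample([Analysis(12), Analysis(12), Analysis(0)])
for analysis in sample.analyses:
    analysis.submit()
    sample.maybe_follow()

print(sample.state)  # -> to_be_verified
```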
Auto submission of a Worksheet when all its analyses are submitted
The same behavior as for Analysis Requests applies to the worksheet when all its
analyses are submitted.
Create two Analysis Requests:
>>> ar0 = new_ar([Cu, Fe, Au])
>>> ar1 = new_ar([Cu, Fe])
Create a worksheet:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
And assign all the analyses from the Analysis Requests created before, except
Au from the first Analysis Request:
>>> analyses_ar0 = ar0.getAnalyses(full_objects=True)
>>> analyses_ar1 = ar1.getAnalyses(full_objects=True)
>>> analyses = filter(lambda an: an.getKeyword() != 'Au', analyses_ar0)
>>> analyses += analyses_ar1
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Set results and submit all analyses from the worksheet except one:
>>> ws_analyses = worksheet.getAnalyses()
>>> analysis_1 = analyses[0]
>>> analysis_2 = analyses[1]
>>> analysis_3 = analyses[2]
>>> analysis_4 = analyses[3]
>>> analysis_2.setResult('5')
>>> transitioned = do_action_for(analysis_2, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_2)
'to_be_verified'
>>> analysis_3.setResult('6')
>>> transitioned = do_action_for(analysis_3, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_3)
'to_be_verified'
>>> analysis_4.setResult('7')
>>> transitioned = do_action_for(analysis_4, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_4)
'to_be_verified'
The Analysis Request number 1 has been automatically transitioned because all
the contained analyses have been submitted:
>>> api.get_workflow_status_of(ar1)
'to_be_verified'
While Analysis Request number 0 has not been transitioned, because it still has
two analyses with results pending:
>>> api.get_workflow_status_of(ar0)
'sample_received'
And the same applies to the worksheet, because one result is still pending:
>>> api.get_workflow_status_of(worksheet)
'open'
If we set a result for the pending analysis:
>>> analysis_1.setResult('9')
>>> transitioned = do_action_for(analysis_1, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis_1)
'to_be_verified'
The worksheet will follow:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
But the Analysis Request number 0 will remain sample_received:
>>> api.get_workflow_status_of(ar0)
'sample_received'
Unless we submit a result for Au analysis:
>>> au_an = filter(lambda an: an.getKeyword() == 'Au', analyses_ar0)[0]
>>> au_an.setResult('10')
>>> transitioned = do_action_for(au_an, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(au_an)
'to_be_verified'
>>> api.get_workflow_status_of(ar0)
'to_be_verified'
Submission of results for analyses with interim fields set
For an analysis to be submitted successfully, it must have a result set, and if
the analysis has interim fields, those are mandatory too:
>>> Au.setInterimFields([
... {"keyword": "interim_1", "title": "Interim 1",},
... {"keyword": "interim_2", "title": "Interim 2",}])
Create an Analysis Request:
>>> ar = new_ar([Au])
>>> analysis = ar.getAnalyses(full_objects=True)[0]
Cannot submit if no result is set:
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
But even if we set a result, we cannot submit because interims are missing:
>>> analysis.setResult(12)
>>> analysis.getResult()
'12'
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
So, if the analysis has interims defined, all of them are required too:
>>> analysis.setInterimValue("interim_1", 15)
>>> analysis.getInterimValue("interim_1")
'15'
>>> analysis.getInterimValue("interim_2")
''
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
Even if we set an invalid value (None or empty) for an interim:
>>> analysis.setInterimValue("interim_2", None)
>>> analysis.getInterimValue("interim_2")
''
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
>>> analysis.setInterimValue("interim_2", '')
>>> analysis.getInterimValue("interim_2")
''
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
But it will work if the value is 0:
>>> analysis.setInterimValue("interim_2", 0)
>>> analysis.getInterimValue("interim_2")
'0'
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
And the Analysis Request follows:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
It might happen the other way around: we set interims, but not a result:
>>> ar = new_ar([Au])
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> analysis.setInterimValue("interim_1", 10)
>>> analysis.setInterimValue("interim_2", 20)
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
The result is still required; once we set it, the submission succeeds:
>>> analysis.setResult(12)
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
And again, the Analysis Request will follow:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
Submission of results for analyses with interim calculation
If an analysis has a calculation assigned, the result will be calculated
automatically based on the calculation. If the calculation has interims set,
only those without a default value will be required.
Prepare the calculation and set the calculation to analysis Au:
>>> Au.setInterimFields([])
>>> calc = api.create(bikasetup.bika_calculations, 'Calculation', title='Test Calculation')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1', 'value': 10}
>>> interim_2 = {'keyword': 'IT2', 'title': 'Interim 2', 'value': 2}
>>> interim_3 = {'keyword': 'IT3', 'title': 'Interim 3', 'value': ''}
>>> interim_4 = {'keyword': 'IT4', 'title': 'Interim 4', 'value': None}
>>> interim_5 = {'keyword': 'IT5', 'title': 'Interim 5'}
>>> interims = [interim_1, interim_2, interim_3, interim_4, interim_5]
>>> calc.setInterimFields(interims)
>>> calc.setFormula("[IT1]+[IT2]+[IT3]+[IT4]+[IT5]")
>>> Au.setCalculation(calc)
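Formulas interpolate `[keyword]` placeholders with interim (or dependency) values before being evaluated. The sketch below shows that substitution under the assumption of purely numeric values; it is not SENAITE's calculation engine:

```python
import re

def evaluate_formula(formula, values):
    """Replace each [keyword] placeholder with its numeric value
    and evaluate the resulting arithmetic expression."""
    def substitute(match):
        return str(float(values[match.group(1)]))
    expression = re.sub(r"\[([^\]]+)\]", substitute, formula)
    # eval is fine for this illustrative sketch; a real engine
    # would use a restricted evaluator
    return eval(expression)

result = evaluate_formula(
    "[IT1]+[IT2]+[IT3]+[IT4]+[IT5]",
    {"IT1": 10, "IT2": 2, "IT3": 3, "IT4": 4, "IT5": 5})
print(result)  # -> 24.0
```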
Create an Analysis Request:
>>> ar = new_ar([Au])
>>> analysis = ar.getAnalyses(full_objects=True)[0]
Cannot submit if no result is set:
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> analysis.setResult("12")
Set a value for interim IT5:
>>> analysis.setInterimValue("IT5", 5)
Cannot transition because IT3 and IT4 have None/empty values as default:
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
Let’s set a value for those interims:
>>> analysis.setInterimValue("IT3", 3)
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(analysis)
'unassigned'
>>> analysis.setInterimValue("IT4", 4)
Since interims IT1 and IT2 have default values set, the analysis will submit:
>>> transitioned = do_action_for(analysis, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(analysis)
'to_be_verified'
Submission of results for analyses with dependencies
If an analysis is associated with a calculation that uses the results of other
analyses (dependencies), then the analysis cannot be submitted unless all its
dependencies were previously submitted.
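This constraint can be sketched by extracting the `[keyword]` placeholders from each formula and checking them recursively. The helper below is hypothetical and ignores interims, which would also appear as placeholders in real formulas:

```python
import re

def get_dependencies(formula):
    """Keywords referenced by a formula, e.g. '[Cu]*10' -> ['Cu']."""
    return re.findall(r"\[([^\]]+)\]", formula)

def can_submit(keyword, formulas, submitted):
    """An analysis whose calculation references other analyses can
    only be submitted once those dependencies, recursively, have
    been submitted themselves."""
    for dep in get_dependencies(formulas.get(keyword, "")):
        if dep not in submitted:
            return False
        if not can_submit(dep, formulas, submitted):
            return False
    return True

formulas = {"Fe": "[Cu]*10", "Au": "[Fe]/2"}

print(can_submit("Fe", formulas, submitted=set()))         # False: Cu pending
print(can_submit("Au", formulas, submitted={"Fe"}))        # False: Cu pending
print(can_submit("Au", formulas, submitted={"Cu", "Fe"}))  # True
```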
Reset the interim fields for analysis Au:
>>> Au.setInterimFields([])
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1'}
>>> calc_au.setInterimFields([interim_1])
>>> calc_au.setFormula("([IT1]+[Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu_analysis = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au_analysis = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> fe_analysis.setResult(12)
>>> au_analysis.setResult(10)
Cannot submit Fe, because there is no result for Cu yet:
>>> transitioned = do_action_for(fe_analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(fe_analysis)
'unassigned'
And we cannot submit Au, because Cu, a dependency of Fe, has no result:
>>> transitioned = do_action_for(au_analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(au_analysis)
'unassigned'
Set a result for Cu and submit:
>>> cu_analysis.setResult(12)
>>> transitioned = do_action_for(cu_analysis, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(cu_analysis)
'to_be_verified'
But Fe won’t follow, because dependencies follow their dependents, not the other way around:
>>> api.get_workflow_status_of(fe_analysis)
'unassigned'
If we try to submit Au, the submission will not take place:
>>> transitioned = do_action_for(au_analysis, "submit")
>>> transitioned[0]
False
>>> api.get_workflow_status_of(au_analysis)
'unassigned'
This is because of the missing interim. Set the interim for Au:
>>> au_analysis.setInterimValue("IT1", 4)
And now we are able to submit Au:
>>> transitioned = do_action_for(au_analysis, "submit")
>>> transitioned[0]
True
>>> api.get_workflow_status_of(au_analysis)
'to_be_verified'
And since Fe is a dependency of Au, Fe will be automatically transitioned:
>>> api.get_workflow_status_of(fe_analysis)
'to_be_verified'
As well as the Analysis Request:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
Check permissions for Submit transition
Create an Analysis Request:
>>> ar = new_ar([Cu])
The status of the Analysis Request is sample_received:
>>> api.get_workflow_status_of(ar)
'sample_received'
And the status of the Analysis is unassigned:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['unassigned']
Set a result:
>>> analysis = analyses[0]
>>> analysis.setResult(23)
Exactly these roles can submit:
>>> get_roles_for_permission("senaite.core: Edit Results", analysis)
['Analyst', 'LabManager', 'Manager']
>>> get_roles_for_permission("senaite.core: Edit Field Results", analysis)
['LabManager', 'Manager', 'Sampler']
And these roles can view results:
>>> get_roles_for_permission("senaite.core: View Results", analysis)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'Publisher', 'RegulatoryInspector', 'Sampler', 'Verifier']
The current user can submit because they have the LabManager role:
>>> isTransitionAllowed(analysis, "submit")
True
But not with other roles:
>>> setRoles(portal, TEST_USER_ID, ['Authenticated', 'LabClerk', 'RegulatoryInspector'])
>>> isTransitionAllowed(analysis, "submit")
False
Not even if they are the Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(analysis, "submit")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(analysis, "submit")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Analysis unassign guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisUnassign
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from senaite.core.catalog import SAMPLE_CATALOG
>>> from senaite.core.catalog import WORKSHEET_CATALOG
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
>>> def to_new_worksheet_with_duplicate(ar):
... worksheet = api.create(portal.worksheets, "Worksheet")
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... worksheet.addDuplicateAnalyses(1)
... return worksheet
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Unassign transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
The status of the analyses is unassigned:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['unassigned', 'unassigned', 'unassigned']
And the Analysis Request's assigned state index is 'unassigned':
>>> query = dict(assigned_state='unassigned', UID=api.get_uid(ar))
>>> len(api.search(query, SAMPLE_CATALOG))
1
>>> query = dict(assigned_state='assigned', UID=api.get_uid(ar))
>>> len(api.search(query, SAMPLE_CATALOG))
0
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
>>> sorted((map(lambda an: an.getKeyword(), worksheet.getAnalyses())))
['Au', 'Cu', 'Fe']
>>> map(api.get_workflow_status_of, analyses)
['assigned', 'assigned', 'assigned']
The Analysis Request's assigned state index is now 'assigned':
>>> query = dict(assigned_state='unassigned', UID=api.get_uid(ar))
>>> len(api.search(query, SAMPLE_CATALOG))
0
>>> query = dict(assigned_state='assigned', UID=api.get_uid(ar))
>>> len(api.search(query, SAMPLE_CATALOG))
1
The worksheet has now 3 analyses assigned:
>>> worksheet.getNumberOfRegularAnalyses()
3
>>> worksheet.getNumberOfQCAnalyses()
0
And metadata gets updated accordingly:
>>> query = dict(UID=api.get_uid(worksheet))
>>> ws_brain = api.search(query, WORKSHEET_CATALOG)[0]
>>> ws_brain.getNumberOfRegularAnalyses
3
>>> ws_brain.getNumberOfQCAnalyses
0
>>> an_uids = sorted(map(api.get_uid, worksheet.getAnalyses()))
>>> sorted(ws_brain.getAnalysesUIDs) == an_uids
True
When we unassign the Cu analysis, the worksheet gets updated:
>>> cu = filter(lambda an: an.getKeyword() == 'Cu', worksheet.getAnalyses())[0]
>>> succeed = do_action_for(cu, "unassign")
>>> api.get_workflow_status_of(cu)
'unassigned'
>>> cu in worksheet.getAnalyses()
False
>>> worksheet.getNumberOfRegularAnalyses()
2
>>> ws_brain = api.search(query, WORKSHEET_CATALOG)[0]
>>> ws_brain.getNumberOfRegularAnalyses
2
>>> api.get_uid(cu) in ws_brain.getAnalysesUIDs
False
>>> len(ws_brain.getAnalysesUIDs)
2
And the Analysis Request's assigned state index is updated as well:
>>> query = dict(assigned_state='unassigned', UID=api.get_uid(ar))
>>> len(api.search(query, SAMPLE_CATALOG))
1
>>> query = dict(assigned_state='assigned', UID=api.get_uid(ar))
>>> len(api.search(query, SAMPLE_CATALOG))
0
Unassigning an analysis causes its duplicates to be removed
When the analysis a duplicate comes from is unassigned, the duplicate is
removed from the worksheet too.
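The behavior can be sketched with plain dictionaries standing in for the worksheet and its duplicates; these are hypothetical structures, not SENAITE's event handler:

```python
def on_unassign(worksheet, analysis_uid):
    """Hypothetical unassign handler: remove the analysis from the
    worksheet along with any duplicates created from it."""
    worksheet["analyses"].remove(analysis_uid)
    worksheet["duplicates"] = [
        dup for dup in worksheet["duplicates"]
        if dup["source"] != analysis_uid]

worksheet = {
    "analyses": ["cu-uid"],
    "duplicates": [{"uid": "dcu-uid", "source": "cu-uid"}],
}
on_unassign(worksheet, "cu-uid")
print(worksheet["duplicates"])  # -> []
```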
Create a Worksheet and add the analyses:
>>> ar = new_ar([Cu])
>>> transitioned = do_action_for(ar, "receive")
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> api.get_workflow_status_of(worksheet)
'open'
>>> cu = ar.getAnalyses(full_objects=True)[0]
>>> dcu = worksheet.getDuplicateAnalyses()[0]
When the analysis Cu is unassigned, the duplicate is removed:
>>> dcu_uid = api.get_uid(dcu)
>>> try_transition(cu, "unassign", "unassigned")
True
>>> api.get_workflow_status_of(cu)
'unassigned'
>>> dcu_uid in map(api.get_uid, worksheet.getDuplicateAnalyses())
False
>>> api.get_object_by_uid(dcu_uid, None) is None
True
Analysis verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowAnalysisVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Verify transition and guard basic constraints
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu, Fe, Au])
>>> submit_analyses(ar)
The status of the Analysis Request and its analyses is to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['to_be_verified', 'to_be_verified', 'to_be_verified']
I cannot verify the analyses because I am the same user who submitted them:
>>> try_transition(analyses[0], "verify", "verified")
False
>>> api.get_workflow_status_of(analyses[0])
'to_be_verified'
>>> try_transition(analyses[1], "verify", "verified")
False
>>> api.get_workflow_status_of(analyses[1])
'to_be_verified'
>>> try_transition(analyses[2], "verify", "verified")
False
>>> api.get_workflow_status_of(analyses[2])
'to_be_verified'
Nor can I verify the Analysis Request, because an Analysis Request can only be
verified once all the analyses it contains are verified (and this happens
automatically):
>>> try_transition(ar, "verify", "verified")
False
>>> api.get_workflow_status_of(ar)
'to_be_verified'
But if I enable self-verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Then, I will be able to verify my own results:
>>> try_transition(analyses[0], "verify", "verified")
True
>>> try_transition(analyses[1], "verify", "verified")
True
But the Analysis Request will remain in to_be_verified state:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
Until we verify all the analyses it contains:
>>> try_transition(analyses[2], "verify", "verified")
True
>>> api.get_workflow_status_of(ar)
'verified'
And we cannot re-verify an analysis that has been verified already:
>>> try_transition(analyses[2], "verify", "verified")
False
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Auto verification of Worksheets when all their analyses are verified
The same behavior as for Analysis Requests applies to the worksheet when all its
analyses are verified.
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create two Analysis Requests:
>>> ar0 = new_ar([Cu, Fe, Au])
>>> ar1 = new_ar([Cu, Fe])
Create a worksheet:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
And assign all the analyses from the Analysis Requests created before, except
Au from the first Analysis Request:
>>> analyses_ar0 = ar0.getAnalyses(full_objects=True)
>>> analyses_ar1 = ar1.getAnalyses(full_objects=True)
>>> analyses = filter(lambda an: an.getKeyword() != 'Au', analyses_ar0)
>>> analyses += analyses_ar1
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
And submit results for all analyses:
>>> submit_analyses(ar0)
>>> submit_analyses(ar1)
Of course I cannot verify the whole worksheet, because a worksheet can only be
verified once all the analyses it contains are in verified state (and this is
done automatically):
>>> try_transition(worksheet, "verify", "verified")
False
And verify all analyses from worksheet except one:
>>> ws_analyses = worksheet.getAnalyses()
>>> analysis_1 = analyses[0]
>>> analysis_2 = analyses[1]
>>> analysis_3 = analyses[2]
>>> analysis_4 = analyses[3]
>>> try_transition(analysis_2, "verify", "verified")
True
>>> try_transition(analysis_3, "verify", "verified")
True
>>> try_transition(analysis_4, "verify", "verified")
True
The Analysis Request number 1 has been automatically transitioned to verified,
because all the contained analyses have been verified:
>>> api.get_workflow_status_of(ar1)
'verified'
While Analysis Request number 0 has not been transitioned, because it still has
two analyses to be verified:
>>> api.get_workflow_status_of(ar0)
'to_be_verified'
And the same applies to the worksheet, because one analysis is still pending:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And again, I cannot verify the whole worksheet by myself, because a worksheet
can only be verified once all the analyses it contains are in verified state
(and this is done automatically):
>>> try_transition(worksheet, "verify", "verified")
False
If we verify the pending analysis from the worksheet:
>>> try_transition(analysis_1, "verify", "verified")
True
The worksheet will follow:
>>> api.get_workflow_status_of(worksheet)
'verified'
But the Analysis Request number 0 will remain in to_be_verified state:
>>> api.get_workflow_status_of(ar0)
'to_be_verified'
Unless we verify the analysis Au:
>>> au_an = filter(lambda an: an.getKeyword() == 'Au', analyses_ar0)[0]
>>> try_transition(au_an, "verify", "verified")
True
>>> api.get_workflow_status_of(ar0)
'verified'
Verification of results for analyses with dependencies
If an analysis is associated to a calculation that uses the results of other
analyses (its dependencies), then verifying that analysis (the dependent) will
auto-verify its dependencies.
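This cascading behaviour can be sketched with a toy dependency graph. The
`Analysis` class and `verify` function below are illustrative stand-ins, not
the SENAITE API:

```python
# Sketch only: verifying an analysis cascades to the analyses whose
# results its calculation consumes (its dependencies).

class Analysis:
    def __init__(self, keyword, dependencies=()):
        self.keyword = keyword
        self.dependencies = list(dependencies)
        self.state = "to_be_verified"

def verify(analysis):
    """Verify the analysis and recursively verify its dependencies."""
    analysis.state = "verified"
    for dependency in analysis.dependencies:
        if dependency.state == "to_be_verified":
            verify(dependency)

cu = Analysis("Cu")
fe = Analysis("Fe", dependencies=[cu])   # Fe formula uses [Cu]
au = Analysis("Au", dependencies=[fe])   # Au formula uses [Fe]

verify(au)
print([a.state for a in (cu, fe, au)])   # all three end up 'verified'
```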
Reset the interim fields for analysis Au:
>>> Au.setInterimFields([])
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> calc_au.setFormula("([Fe])/2")
>>> Au.setCalculation(calc_au)
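The `[Keyword]` placeholders in these formulas are resolved against the
results of the dependency analyses before evaluation. A simplified,
hypothetical stand-in for that resolution step (`calculate` is an illustrative
name, not a SENAITE function):

```python
import re

def calculate(formula, results):
    """Resolve [Keyword] placeholders against known results and evaluate.

    Simplified sketch of the calculation step; formulas are assumed to be
    trusted lab configuration, hence the plain eval().
    """
    expression = re.sub(
        r"\[(\w+)\]", lambda m: repr(float(results[m.group(1)])), formula)
    return eval(expression)

results = {"Cu": 20, "Fe": 12}
print(calculate("[Cu]*10", results))    # 200.0
print(calculate("([Fe])/2", results))   # 6.0
```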
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu_analysis = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au_analysis = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
TODO: This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> cu_analysis.setResult(20)
>>> fe_analysis.setResult(12)
>>> au_analysis.setResult(10)
Submit Au analysis and the rest will follow:
>>> try_transition(au_analysis, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(au_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(fe_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(cu_analysis)
'to_be_verified'
If I verify Au, the rest of the analyses (its dependencies) will follow too:
>>> try_transition(au_analysis, "verify", "verified")
True
>>> api.get_workflow_status_of(au_analysis)
'verified'
>>> api.get_workflow_status_of(fe_analysis)
'verified'
>>> api.get_workflow_status_of(cu_analysis)
'verified'
And Analysis Request is transitioned too:
>>> api.get_workflow_status_of(ar)
'verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Check permissions for Verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu])
>>> submit_analyses(ar)
The status of the Analysis Request and its analyses is to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> analyses = ar.getAnalyses(full_objects=True)
>>> map(api.get_workflow_status_of, analyses)
['to_be_verified']
Exactly these roles can verify:
>>> analysis = analyses[0]
>>> get_roles_for_permission("senaite.core: Transition: Verify", analysis)
['LabManager', 'Manager', 'Verifier']
The current user can verify because they have the LabManager role:
>>> isTransitionAllowed(analysis, "verify")
True
And also if the user has the Manager or Verifier role:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(analysis, "verify")
True
>>> setRoles(portal, TEST_USER_ID, ['Verifier',])
>>> isTransitionAllowed(analysis, "verify")
True
But cannot for other roles:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(analysis, "verify")
False
Not even if the user is the Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(analysis, "verify")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(analysis, "verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by verified analyses
When verified, routine analyses are marked with the IVerified interface:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> sample = new_ar([Cu])
>>> submit_analyses(sample)
>>> analysis = sample.getAnalyses(full_objects=True)[0]
>>> IVerified.providedBy(analysis)
False
>>> success = do_action_for(analysis, "verify")
>>> IVerified.providedBy(analysis)
True
>>> bikasetup.setSelfVerificationEnabled(False)
Duplicate Analysis assign guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowDuplicateAnalysisAssign
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Assign transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Add duplicate analyses from the analyses in position 1:
>>> duplicates = worksheet.addDuplicateAnalyses(1)
>>> len(duplicates)
3
The status of the duplicates is assigned:
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> map(api.get_workflow_status_of, duplicates)
['assigned', 'assigned', 'assigned']
And are associated to the worksheet:
>>> wuid = list(set(map(lambda dup: dup.getWorksheetUID(), duplicates)))
>>> len(wuid)
1
>>> wuid[0] == api.get_uid(worksheet)
True
Duplicates do not have an Analyst assigned, though:
>>> list(set(map(lambda dup: dup.getAnalyst(), duplicates)))
['']
If I assign a user to the Worksheet, same user will be assigned to analyses:
>>> worksheet.setAnalyst(TEST_USER_ID)
>>> worksheet.getAnalyst() == TEST_USER_ID
True
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, analyses)
[]
And to the duplicates as well:
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, duplicates)
[]
I can remove one of the duplicates from the Worksheet:
>>> duplicate = duplicates[0]
>>> dup_uid = api.get_uid(duplicate)
>>> worksheet.removeAnalysis(duplicate)
>>> len(worksheet.getDuplicateAnalyses())
2
And the removed duplicate no longer exists:
>>> api.get_object_by_uid(dup_uid, None) is None
True
We add again duplicates for same analyses from slot 1 to slot 2:
>>> dup_uids = map(api.get_uid, worksheet.getDuplicateAnalyses())
>>> duplicates = worksheet.addDuplicateAnalyses(1, 2)
Since there is only one duplicate analysis missing in slot 2 (that we removed
earlier), only one duplicate analysis is added:
>>> len(duplicates)
1
>>> len(worksheet.getDuplicateAnalyses())
3
>>> len(filter(lambda dup: dup in duplicates, worksheet.getDuplicateAnalyses()))
1
And since the worksheet has an Analyst already assigned, duplicates too:
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, duplicates)
[]
From assigned state I can do submit:
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> map(api.get_workflow_status_of, duplicates)
['assigned', 'assigned', 'assigned']
>>> duplicates[0].setResult(20)
>>> duplicates[1].setResult(23)
>>> try_transition(duplicates[0], "submit", "to_be_verified")
True
>>> try_transition(duplicates[1], "submit", "to_be_verified")
True
And duplicates transition to to_be_verified:
>>> map(api.get_workflow_status_of, duplicates)
['to_be_verified', 'to_be_verified', 'assigned']
While keeping the Analyst that was assigned to the worksheet:
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, duplicates)
[]
And since there are still regular analyses in the Worksheet that have not been
submitted yet, the Worksheet remains in open state:
>>> api.get_workflow_status_of(worksheet)
'open'
Duplicates get removed when I unassign the analyses they come from:
>>> duplicate = duplicates[0]
>>> analysis = duplicate.getAnalysis()
>>> dup_uid = api.get_uid(duplicate)
>>> an_uid = api.get_uid(analysis)
>>> worksheet.removeAnalysis(analysis)
>>> api.get_workflow_status_of(analysis)
'unassigned'
>>> filter(lambda an: api.get_uid(an) == an_uid, worksheet.getAnalyses())
[]
>>> filter(lambda dup: api.get_uid(dup.getAnalysis()) == an_uid, worksheet.getDuplicateAnalyses())
[]
>>> len(worksheet.getDuplicateAnalyses())
2
>>> api.get_object_by_uid(dup_uid, None) is None
True
I submit the results for the rest of analyses:
>>> for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(10)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getRegularAnalyses())
['to_be_verified', 'to_be_verified']
And since there is a duplicate that has not yet been submitted, the Worksheet
remains in open state:
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> duplicate = filter(lambda dup: api.get_workflow_status_of(dup) == "assigned", duplicates)
>>> len(duplicate)
1
>>> duplicate = duplicate[0]
>>> api.get_workflow_status_of(duplicate)
'assigned'
>>> api.get_workflow_status_of(worksheet)
'open'
But if I remove the duplicate analysis that has not yet been submitted, the
status of the Worksheet is promoted to to_be_verified, because all the
remaining analyses are in to_be_verified state:
>>> dup_uid = api.get_uid(duplicate)
>>> worksheet.removeAnalysis(duplicate)
>>> len(worksheet.getDuplicateAnalyses())
1
>>> api.get_object_by_uid(dup_uid, None) is None
True
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And now, I cannot add duplicates anymore:
>>> worksheet.addDuplicateAnalyses(1)
[]
>>> len(worksheet.getDuplicateAnalyses())
1
Check permissions for Assign transition
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Add duplicate analyses of the analyses in position 1:
>>> len(worksheet.addDuplicateAnalyses(1))
3
Since a duplicate can only live inside a Worksheet, the initial state of the
duplicate is assigned by default:
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> map(api.get_workflow_status_of, duplicates)
['assigned', 'assigned', 'assigned']
Duplicate Analysis multi-verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowDuplicateAnalysisMultiVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_duplicate(ar):
... worksheet = api.create(portal.worksheets, "Worksheet")
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... worksheet.addDuplicateAnalyses(1)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Multiverify not allowed if multi-verification is not enabled
Enable self verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get the duplicate and submit:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
The status of duplicate and others is to_be_verified:
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
I cannot multi-verify the analysis because multi-verification is not enabled:
>>> isTransitionAllowed(duplicate, "multi_verify")
False
>>> try_transition(duplicate, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
But I can verify:
>>> isTransitionAllowed(duplicate, "verify")
True
>>> try_transition(duplicate, "verify", "verified")
True
And the status of the duplicate is now verified:
>>> api.get_workflow_status_of(duplicate)
'verified'
While the rest remain in to_be_verified state because the regular analysis
hasn’t been verified yet:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Multiverify transition with multi-verification enabled
The system allows setting multiple verifiers, both at Setup level and at
Analysis Service level. If set, the analysis will only transition to verified
when the total number of verifications equals the configured number of
required verifications.
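The required-verifications counter described above can be sketched as follows;
`MultiVerifiable`, `remaining`, and the guard methods are illustrative names,
not the SENAITE API:

```python
class MultiVerifiable:
    """Minimal sketch of a required-verifications counter (illustrative)."""

    def __init__(self, required):
        self.required = required
        self.verifiers = []  # user ids, in order of verification

    def remaining(self):
        return self.required - len(self.verifiers)

    def can_verify(self):
        # plain "verify" is only allowed for the last pending verification
        return self.remaining() == 1

    def can_multi_verify(self):
        return self.remaining() > 1

dup = MultiVerifiable(required=3)
print(dup.can_verify(), dup.can_multi_verify())   # False True
dup.verifiers += ["user-1", "user-2"]
print(dup.can_verify(), dup.can_multi_verify())   # True False
```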
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Not allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_disabled')
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get the duplicate and submit:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
The status of duplicate and others is to_be_verified:
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
I cannot verify:
>>> isTransitionAllowed(duplicate, "verify")
False
>>> try_transition(duplicate, "verify", "verified")
False
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
Because multi-verification is enabled:
>>> bikasetup.getNumberOfRequiredVerifications()
3
And there are 3 verifications remaining:
>>> duplicate.getNumberOfRemainingVerifications()
3
But I can multi-verify:
>>> isTransitionAllowed(duplicate, "multi_verify")
True
>>> try_transition(duplicate, "multi_verify", "to_be_verified")
True
The status remains in to_be_verified:
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
And my user id is recorded as such:
>>> action = api.get_review_history(duplicate)[0]
>>> action['actor'] == TEST_USER_ID
True
And now, there are two verifications remaining:
>>> duplicate.getNumberOfRemainingVerifications()
2
So, I cannot verify yet:
>>> isTransitionAllowed(duplicate, "verify")
False
>>> try_transition(duplicate, "verify", "verified")
False
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
But I cannot multi-verify either, because I am the same user who performed the
last multi-verification:
>>> isTransitionAllowed(duplicate, "multi_verify")
False
>>> try_transition(duplicate, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
And the system is configured to not allow same user to verify multiple times:
>>> bikasetup.getTypeOfmultiVerification()
'self_multi_disabled'
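A minimal sketch of this same-user guard, assuming (hypothetically) that the
list of previous verifiers is available. None of these names belong to the
SENAITE API:

```python
def can_multi_verify(verifiers, user, remaining, multi_type):
    """Guard sketch: with 'self_multi_disabled', a user who already
    verified may not multi-verify again (illustrative logic only)."""
    if remaining <= 1:
        # only the plain "verify" transition is left at this point
        return False
    if multi_type == "self_multi_disabled" and user in verifiers:
        return False
    return True

print(can_multi_verify(["alice"], "alice", 2, "self_multi_disabled"))  # False
print(can_multi_verify(["alice"], "alice", 2, "self_multi_enabled"))   # True
```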
But I can multi-verify if I change the type of multi-verification:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
>>> isTransitionAllowed(duplicate, "multi_verify")
True
>>> try_transition(duplicate, "multi_verify", "to_be_verified")
True
The status remains in to_be_verified:
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
Since there is only one verification remaining, I cannot multi-verify again:
>>> duplicate.getNumberOfRemainingVerifications()
1
>>> isTransitionAllowed(duplicate, "multi_verify")
False
>>> try_transition(duplicate, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
But now, I can verify:
>>> isTransitionAllowed(duplicate, "verify")
True
>>> try_transition(duplicate, "verify", "verified")
True
There are no verifications remaining:
>>> duplicate.getNumberOfRemainingVerifications()
0
And the status of the duplicate is now verified:
>>> api.get_workflow_status_of(duplicate)
'verified'
While the rest remain in to_be_verified state because the regular analysis
hasn’t been verified yet:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
If we multi-verify the regular analysis (2+1 times):
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
>>> try_transition(analysis, "verify", "verified")
True
The rest transition to verified:
>>> api.get_workflow_status_of(ar)
'verified'
>>> api.get_workflow_status_of(worksheet)
'verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Check permissions for Multi verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get the duplicate and submit:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
Exactly these roles can multi_verify:
>>> get_roles_for_permission("senaite.core: Transition: Verify", duplicate)
['LabManager', 'Manager', 'Verifier']
The current user can multi_verify because they have the LabManager role:
>>> isTransitionAllowed(duplicate, "multi_verify")
True
And also if the user has the Manager or Verifier role:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(duplicate, "multi_verify")
True
TODO: Workflow Verifier should be able to multi_verify a duplicate!
The code below throws an Unauthorized: Not authorized to access binding:
context error, raised by
https://github.com/MatthewWilkes/Zope/blob/master/src/Shared/DC/Scripts/Bindings.py#L198
# >>> setRoles(portal, TEST_USER_ID, ['Verifier',])
# >>> isTransitionAllowed(duplicate, "multi_verify")
# True
But cannot for other roles:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(duplicate, "multi_verify")
False
Not even if the user is the Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(duplicate, "multi_verify")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(duplicate, "multi_verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by fully verified duplicates
Duplicates that have not been fully verified do not provide IVerified:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.setNumberOfRequiredVerifications(2)
>>> bikasetup.setTypeOfmultiVerification("self_multi_enabled")
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(sample)
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> success = do_action_for(duplicate, "submit")
>>> IVerified.providedBy(duplicate)
False
>>> success = do_action_for(duplicate, "multi_verify")
>>> IVerified.providedBy(duplicate)
False
>>> success = do_action_for(duplicate, "verify")
>>> IVerified.providedBy(duplicate)
True
>>> bikasetup.setSelfVerificationEnabled(False)
Duplicate Analysis retract guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowDuplicateAnalysisRetract
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IRetracted
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_duplicate(ar):
... worksheet = api.create(portal.worksheets, "Worksheet")
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... worksheet.addDuplicateAnalyses(1)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Retract transition and guard basic constraints
Create an Analysis Request and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get the duplicate and submit:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
Retract the duplicate:
>>> try_transition(duplicate, "retract", "retracted")
True
>>> api.get_workflow_status_of(duplicate)
'retracted'
And a new duplicate has been added in assigned state:
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> sorted(map(api.get_workflow_status_of, duplicates))
['assigned', 'retracted']
And the Worksheet has been transitioned to open:
>>> api.get_workflow_status_of(worksheet)
'open'
While the Analysis Request is still in to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
The new analysis is a copy of the retracted one:
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "assigned", duplicates)[0]
>>> retest.getKeyword() == duplicate.getKeyword()
True
>>> retest.getReferenceAnalysesGroupID() == duplicate.getReferenceAnalysesGroupID()
True
>>> retest.getRetestOf() == duplicate
True
>>> duplicate.getRetest() == retest
True
>>> retest.getAnalysis() == duplicate.getAnalysis()
True
And it keeps the same result as the retracted one:
>>> retest.getResult() == duplicate.getResult()
True
And is located in the same slot as well:
>>> worksheet.get_slot_position_for(duplicate) == worksheet.get_slot_position_for(retest)
True
If I submit the result for the new duplicate:
>>> try_transition(retest, "submit", "to_be_verified")
True
The status of both the duplicate and the Worksheet is “to_be_verified”:
>>> api.get_workflow_status_of(retest)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And I can even retract the retest:
>>> try_transition(retest, "retract", "retracted")
True
>>> api.get_workflow_status_of(retest)
'retracted'
And a new duplicate has been added in assigned state:
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> sorted(map(api.get_workflow_status_of, duplicates))
['assigned', 'retracted', 'retracted']
And the Worksheet has been transitioned to open:
>>> api.get_workflow_status_of(worksheet)
'open'
Auto-rollback of Worksheet on analysis retraction
Retracting an analysis from a Worksheet that is in “to_be_verified” state
causes a rollback of the Worksheet to “open” state.
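A rough sketch of how such a derived worksheet state could be computed; the
`worksheet_state` function and the state names used here are illustrative,
not SENAITE code:

```python
def worksheet_state(analysis_states):
    """Derived-state sketch: a worksheet is 'to_be_verified' only while
    every analysis has been submitted; a retraction adds a retest in
    'assigned' state, which rolls the worksheet back to 'open'."""
    pending = {"assigned", "unassigned"}
    if any(state in pending for state in analysis_states):
        return "open"
    return "to_be_verified"

states = ["to_be_verified", "to_be_verified", "to_be_verified"]
print(worksheet_state(states))   # 'to_be_verified'
states += ["assigned"]           # a retraction created a retest
print(worksheet_state(states))   # 'open'
```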
Create an Analysis Request and submit results:
>>> ar = new_ar([Cu, Fe, Au])
Create a new Worksheet, assign all analyses and submit:
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... ws.addAnalysis(analysis)
>>> submit_analyses(ar)
The state for both the Analysis Request and Worksheet is “to_be_verified”:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(ws)
'to_be_verified'
Retract one analysis:
>>> analysis = ws.getAnalyses()[0]
>>> try_transition(analysis, "retract", "retracted")
True
A rollback of the state of Analysis Request and Worksheet takes place:
>>> api.get_workflow_status_of(ar)
'sample_received'
>>> api.get_workflow_status_of(ws)
'open'
And both contain an additional analysis:
>>> len(ar.getAnalyses())
4
>>> len(ws.getAnalyses())
4
The state of this additional analysis, the retest, is “assigned”:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "assigned", analyses)[0]
>>> retest.getKeyword() == analysis.getKeyword()
True
>>> retest in ws.getAnalyses()
True
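The rollback rule exercised above can be sketched with a tiny, hypothetical state model (function and state names are illustrative; the real logic lives in SENAITE's workflow machinery):

```python
def worksheet_state(analysis_states):
    """A worksheet is 'to_be_verified' only once every non-retracted
    analysis it contains has been submitted; otherwise it stays (or
    rolls back to) 'open'."""
    active = [s for s in analysis_states if s != "retracted"]
    if active and all(s == "to_be_verified" for s in active):
        return "to_be_verified"
    return "open"

def retract(analysis_states, index):
    """Retracting marks the analysis 'retracted' and adds a new
    'assigned' retest, which forces the worksheet back to 'open'."""
    states = list(analysis_states)
    states[index] = "retracted"
    states.append("assigned")  # the retest copy
    return states

states = ["to_be_verified"] * 4
assert worksheet_state(states) == "to_be_verified"
states = retract(states, 0)
assert worksheet_state(states) == "open"
```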
Retraction of results for analyses with dependents
When retracting an analysis that other analyses depend on, the retraction
automatically cascades to both its dependents and its dependencies.
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> calc_au.setFormula("([Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu_analysis = filter(lambda an: an.getKeyword()=="Cu", analyses)[0]
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> au_analysis = filter(lambda an: an.getKeyword()=="Au", analyses)[0]
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
api.analysis.calculate:
>>> cu_analysis.setResult(20)
>>> fe_analysis.setResult(12)
>>> au_analysis.setResult(10)
Submit Au analysis and the rest will follow:
>>> try_transition(au_analysis, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(au_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(fe_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(cu_analysis)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
If I retract Fe, Au analysis is retracted automatically too:
>>> try_transition(fe_analysis, "retract", "retracted")
True
>>> api.get_workflow_status_of(fe_analysis)
'retracted'
>>> api.get_workflow_status_of(au_analysis)
'retracted'
As well as Cu analysis (a dependency of Fe):
>>> api.get_workflow_status_of(cu_analysis)
'retracted'
Hence, three new analyses are generated accordingly:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> len(analyses)
6
>>> au_analyses = filter(lambda an: an.getKeyword()=="Au", analyses)
>>> sorted(map(api.get_workflow_status_of, au_analyses))
['retracted', 'unassigned']
>>> fe_analyses = filter(lambda an: an.getKeyword()=="Fe", analyses)
>>> sorted(map(api.get_workflow_status_of, fe_analyses))
['retracted', 'unassigned']
>>> cu_analyses = filter(lambda an: an.getKeyword()=="Cu", analyses)
>>> sorted(map(api.get_workflow_status_of, cu_analyses))
['retracted', 'unassigned']
And the current state of the Analysis Request is sample_received now:
>>> api.get_workflow_status_of(ar)
'sample_received'
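The cascade just demonstrated can be sketched as a walk over a small dependency graph mirroring the Cu/Fe/Au setup above (an illustrative sketch only; names and structure are assumptions, not SENAITE's API):

```python
# Fe's calculation uses [Cu]; Au's calculation uses [Fe].
dependents = {"Cu": ["Fe"], "Fe": ["Au"], "Au": []}
dependencies = {"Cu": [], "Fe": ["Cu"], "Au": ["Fe"]}

def cascade_retract(keyword, seen=None):
    """Retract `keyword` and, recursively, every analysis linked to
    it as a dependent or as a dependency."""
    seen = set() if seen is None else seen
    seen.add(keyword)
    for other in dependents[keyword] + dependencies[keyword]:
        if other not in seen:
            cascade_retract(other, seen)
    return seen

# Retracting Fe also retracts Au (dependent) and Cu (dependency):
assert cascade_retract("Fe") == {"Cu", "Fe", "Au"}
```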
IRetracted interface is provided by retracted duplicates
When retracted, duplicate analyses are marked with the IRetracted interface:
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(sample)
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> success = do_action_for(duplicate, "submit")
>>> IRetracted.providedBy(duplicate)
False
>>> success = do_action_for(duplicate, "retract")
>>> IRetracted.providedBy(duplicate)
True
But the retest does not provide IRetracted:
>>> retest = duplicate.getRetest()
>>> IRetracted.providedBy(retest)
False
Duplicate Analysis submission guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowDuplicateAnalysisSubmit
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_duplicate(ar):
... worksheet = api.create(portal.worksheets, "Worksheet")
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... worksheet.addDuplicateAnalyses(1)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Duplicate submission basic constraints
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu, Fe, Au])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get a duplicate:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
Cannot submit a duplicate without a result:
>>> try_transition(duplicate, "submit", "to_be_verified")
False
Even if we try with an empty or None result:
>>> duplicate.setResult('')
>>> try_transition(duplicate, "submit", "to_be_verified")
False
>>> duplicate.setResult(None)
>>> try_transition(duplicate, "submit", "to_be_verified")
False
But it will work if we try with a result of 0:
>>> duplicate.setResult(0)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
And we cannot re-submit a duplicate that has been submitted already:
>>> try_transition(duplicate, "submit", "to_be_verified")
False
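The constraint exercised above (an empty or None result blocks submission, but 0 is a perfectly valid result) can be sketched as a simple guard. The helper name is hypothetical, not SENAITE's real guard:

```python
def has_valid_result(result):
    """Only None and the empty string count as 'no result set';
    a result of 0 is valid and allows submission."""
    return result not in (None, "")

assert not has_valid_result(None)
assert not has_valid_result("")
assert has_valid_result(0)
assert has_valid_result("12")
```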
Auto submission of a Worksheet when all its analyses are submitted
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
Create a worksheet:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
And assign all analyses from the Analysis Request created before:
>>> for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
Add a duplicate of the sample at position 1:
>>> duplicates = worksheet.addDuplicateAnalyses(1)
Set results and submit all analyses from the worksheet except the duplicates:
>>> for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getRegularAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
While the Analysis Request has been transitioned to to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
The worksheet has not been transitioned:
>>> api.get_workflow_status_of(worksheet)
'open'
Because duplicates are still in assigned state:
>>> map(api.get_workflow_status_of, worksheet.getDuplicateAnalyses())
['assigned', 'assigned', 'assigned']
If we set results and submit duplicates:
>>> for analysis in worksheet.getDuplicateAnalyses():
... analysis.setResult(13)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getDuplicateAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
The worksheet will automatically be submitted too:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
Submission of duplicates with interim fields set
Set interims to the analysis Au:
>>> Au.setInterimFields([
... {"keyword": "interim_1", "title": "Interim 1",},
... {"keyword": "interim_2", "title": "Interim 2",}])
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get the duplicate:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
Cannot submit if no result is set:
>>> try_transition(duplicate, "submit", "to_be_verified")
False
But even if we set a result, we cannot submit because interims are missing:
>>> duplicate.setResult(12)
>>> duplicate.getResult()
'12'
>>> try_transition(duplicate, "submit", "to_be_verified")
False
So, if the duplicate has interims defined, all of them are required too:
>>> duplicate.setInterimValue("interim_1", 15)
>>> duplicate.getInterimValue("interim_1")
'15'
>>> duplicate.getInterimValue("interim_2")
''
>>> try_transition(duplicate, "submit", "to_be_verified")
False
Even if we set an invalid (None, empty) value for an interim:
>>> duplicate.setInterimValue("interim_2", None)
>>> duplicate.getInterimValue("interim_2")
''
>>> try_transition(duplicate, "submit", "to_be_verified")
False
>>> duplicate.setInterimValue("interim_2", '')
>>> duplicate.getInterimValue("interim_2")
''
>>> try_transition(duplicate, "submit", "to_be_verified")
False
But it will work if the value is 0:
>>> duplicate.setInterimValue("interim_2", 0)
>>> duplicate.getInterimValue("interim_2")
'0'
>>> try_transition(duplicate, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
It might happen the other way round: we set interims but not a result:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setInterimValue("interim_1", 10)
>>> duplicate.setInterimValue("interim_2", 20)
>>> try_transition(duplicate, "submit", "to_be_verified")
False
Still, the result is required:
>>> duplicate.setResult(12)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
Submission of duplicates with interim calculation
If a duplicate has a calculation assigned, the result will be calculated
automatically based on that calculation. If the calculation has interims set,
only those that do not have a default value will be required.
Prepare the calculation and set the calculation to analysis Au:
>>> Au.setInterimFields([])
>>> calc = api.create(bikasetup.bika_calculations, 'Calculation', title='Test Calculation')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1', 'value': 10}
>>> interim_2 = {'keyword': 'IT2', 'title': 'Interim 2', 'value': 2}
>>> interim_3 = {'keyword': 'IT3', 'title': 'Interim 3', 'value': ''}
>>> interim_4 = {'keyword': 'IT4', 'title': 'Interim 4', 'value': None}
>>> interim_5 = {'keyword': 'IT5', 'title': 'Interim 5'}
>>> interims = [interim_1, interim_2, interim_3, interim_4, interim_5]
>>> calc.setInterimFields(interims)
>>> calc.setFormula("[IT1]+[IT2]+[IT3]+[IT4]+[IT5]")
>>> Au.setCalculation(calc)
Create a Worksheet with duplicate:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
Cannot submit if no result is set:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> try_transition(duplicate, "submit", "to_be_verified")
False
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
api.analysis.calculate:
>>> duplicate.setResult(34)
Set a value for interim IT5:
>>> duplicate.setInterimValue("IT5", 5)
Cannot transition because IT3 and IT4 have None/empty values as default:
>>> try_transition(duplicate, "submit", "to_be_verified")
False
Let’s set a value for those interims:
>>> duplicate.setInterimValue("IT3", 3)
>>> try_transition(duplicate, "submit", "to_be_verified")
False
>>> duplicate.setInterimValue("IT4", 4)
Since interims IT1 and IT2 have default values set, the analysis will submit:
>>> try_transition(duplicate, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
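The interim rule exercised above (interims with a usable default are optional; those whose default is missing, None, or empty must be filled in, and 0 counts as set) can be sketched as follows. The helper and field layout are illustrative assumptions:

```python
def missing_interims(interim_fields, entered):
    """Return the keywords that still need a value before submission:
    interims with no usable default ('' or None) and no entered value.
    A default or entered value of 0 counts as set."""
    missing = []
    for field in interim_fields:
        default = field.get("value")
        value = entered.get(field["keyword"], default)
        if value in (None, ""):
            missing.append(field["keyword"])
    return missing

interims = [
    {"keyword": "IT1", "value": 10},    # default set  -> optional
    {"keyword": "IT2", "value": 2},     # default set  -> optional
    {"keyword": "IT3", "value": ""},    # empty default -> required
    {"keyword": "IT4", "value": None},  # no default    -> required
    {"keyword": "IT5"},                 # no default    -> required
]
assert missing_interims(interims, {}) == ["IT3", "IT4", "IT5"]
assert missing_interims(interims, {"IT3": 3, "IT4": 4, "IT5": 5}) == []
```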
Submission of duplicates with dependencies
Duplicates with dependencies are not allowed. Duplicates can only be created
from analyses without dependencies.
TODO We might consider allowing the creation of duplicates with dependencies
Reset the interim fields for analysis Au:
>>> Au.setInterimFields([])
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1'}
>>> calc_au.setInterimFields([interim_1])
>>> calc_au.setFormula("([IT1]+[Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
Create a Worksheet with duplicate:
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> analyses = worksheet.getRegularAnalyses()
Only one duplicate was created, for Cu, because it is the only analysis that
does not have dependencies:
>>> duplicates = worksheet.getDuplicateAnalyses()
>>> len(duplicates) == 1
True
>>> duplicate = duplicates[0]
>>> duplicate.getKeyword()
'Cu'
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
api.analysis.calculate:
>>> duplicate.setResult(12)
Cannot submit the routine Fe analysis because there is no result for the
routine Cu analysis, and the duplicate of Cu cannot be used as a dependency:
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> try_transition(fe_analysis, "submit", "to_be_verified")
False
Check permissions for Submit transition
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu, Fe, Au])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Set a result:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(23)
Exactly these roles can submit:
>>> get_roles_for_permission("senaite.core: Edit Results", duplicate)
['Analyst', 'LabManager', 'Manager']
And these roles can view results:
>>> get_roles_for_permission("senaite.core: View Results", duplicate)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'RegulatoryInspector']
The current user can submit because they have the LabManager role:
>>> isTransitionAllowed(duplicate, "submit")
True
But not with other roles:
>>> setRoles(portal, TEST_USER_ID, ['Authenticated', 'LabClerk', 'RegulatoryInspector', 'Sampler'])
>>> isTransitionAllowed(duplicate, "submit")
False
Not even as Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(duplicate, "submit")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(duplicate, "submit")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Duplicate Analysis verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowDuplicateAnalysisVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_duplicate(ar):
... worksheet = api.create(portal.worksheets, "Worksheet")
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... worksheet.addDuplicateAnalyses(1)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Duplicate verification basic constraints
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get the duplicate and submit:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
I cannot verify the duplicate because I am the same user who submitted it:
>>> try_transition(duplicate, "verify", "verified")
False
>>> api.get_workflow_status_of(duplicate)
'to_be_verified'
And I cannot verify the Worksheet, because it can only be verified once all
analyses it contains are verified (and this is done automatically):
>>> try_transition(worksheet, "verify", "verified")
False
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
But if I enable self-verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Then, I can verify my own result:
>>> try_transition(duplicate, "verify", "verified")
True
But the Worksheet remains in to_be_verified, because its regular analyses have not been verified yet:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And we cannot re-verify a duplicate that has been verified already:
>>> try_transition(duplicate, "verify", "verified")
False
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
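The verification constraint exercised above can be sketched as a simple guard (an illustrative helper only; the real guard lives in SENAITE's workflow machinery):

```python
def can_verify(submitter, verifier, self_verification_enabled):
    """A user may not verify their own submission unless
    self-verification is enabled in the setup."""
    if submitter == verifier:
        return self_verification_enabled
    return True

# Same user: only allowed when self-verification is enabled.
assert not can_verify("test_user", "test_user", False)
assert can_verify("test_user", "test_user", True)
# A different user may always verify (role checks aside):
assert can_verify("analyst", "labmanager", False)
```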
Check permissions for Verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(ar)
>>> submit_regular_analyses(worksheet)
Get the duplicate and submit:
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> try_transition(duplicate, "submit", "to_be_verified")
True
Exactly these roles can verify:
>>> get_roles_for_permission("senaite.core: Transition: Verify", duplicate)
['LabManager', 'Manager', 'Verifier']
The current user can verify because they have the LabManager role:
>>> isTransitionAllowed(duplicate, "verify")
True
Also if the user has the roles Manager or Verifier:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(duplicate, "verify")
True
TODO Workflow Verifier should be able to verify a duplicate!
The code below throws an
Unauthorized: Not authorized to access binding: context error, raised by
https://github.com/MatthewWilkes/Zope/blob/master/src/Shared/DC/Scripts/Bindings.py#L198
# >>> setRoles(portal, TEST_USER_ID, ['Verifier',])
# >>> isTransitionAllowed(duplicate, "verify")
# True
But not with other roles:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(duplicate, "verify")
False
Not even as Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(duplicate, "verify")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(duplicate, "verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by duplicate analyses that are verified
When verified, duplicate analyses are marked with the IVerified interface:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_duplicate(sample)
>>> duplicate = worksheet.getDuplicateAnalyses()[0]
>>> duplicate.setResult(12)
>>> success = do_action_for(duplicate, "submit")
>>> IVerified.providedBy(duplicate)
False
>>> success = do_action_for(duplicate, "verify")
>>> IVerified.providedBy(duplicate)
True
>>> bikasetup.setSelfVerificationEnabled(False)
Reference Analysis (Blanks) assign guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisBlankAssign
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> ref_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> ref_refs = [{'uid': api.get_uid(Cu), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '0', 'min': '0', 'max': '0'},]
>>> ref_def.setReferenceResults(ref_refs)
>>> ref_sample = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=ref_def,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=ref_refs)
Assign transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Add a blank:
>>> ref_analyses = worksheet.addReferenceAnalyses(ref_sample, [Cu, Fe, Au])
>>> len(ref_analyses)
3
The status of the reference analyses is assigned:
>>> ref_analyses = worksheet.getReferenceAnalyses()
>>> map(api.get_workflow_status_of, ref_analyses)
['assigned', 'assigned', 'assigned']
All of them are blanks:
>>> map(lambda ref: ref.getReferenceType(), ref_analyses)
['b', 'b', 'b']
And they are associated with the worksheet:
>>> wuid = list(set(map(lambda ref: ref.getWorksheetUID(), ref_analyses)))
>>> len(wuid)
1
>>> wuid[0] == api.get_uid(worksheet)
True
Blanks do not have an Analyst assigned, though:
>>> list(set(map(lambda ref: ref.getAnalyst(), ref_analyses)))
['']
If I assign a user to the Worksheet, the same user will be assigned to its analyses:
>>> worksheet.setAnalyst(TEST_USER_ID)
>>> worksheet.getAnalyst() == TEST_USER_ID
True
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, analyses)
[]
And to the blanks as well:
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, ref_analyses)
[]
I can remove one of the blanks from the Worksheet:
>>> ref = ref_analyses[0]
>>> ref_uid = api.get_uid(ref)
>>> worksheet.removeAnalysis(ref)
>>> len(worksheet.getReferenceAnalyses())
2
And the removed blank no longer exists:
>>> api.get_object_by_uid(ref_uid, None) is None
True
From the assigned state I can submit:
>>> ref_analyses = worksheet.getReferenceAnalyses()
>>> map(api.get_workflow_status_of, ref_analyses)
['assigned', 'assigned']
>>> ref_analyses[0].setResult(20)
>>> try_transition(ref_analyses[0], "submit", "to_be_verified")
True
And the submitted blank transitions to to_be_verified:
>>> map(api.get_workflow_status_of, ref_analyses)
['to_be_verified', 'assigned']
While keeping the Analyst that was assigned to the worksheet:
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, ref_analyses)
[]
And since there are still regular analyses in the Worksheet not yet submitted,
the Worksheet remains in the open state:
>>> api.get_workflow_status_of(worksheet)
'open'
I submit the results for the rest of the analyses:
>>> for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(10)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getRegularAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
And since there is a blank that has not yet been submitted, the Worksheet
remains in the open state:
>>> ref = worksheet.getReferenceAnalyses()[1]
>>> api.get_workflow_status_of(ref)
'assigned'
>>> api.get_workflow_status_of(worksheet)
'open'
But if I remove the blank that has not yet been submitted, the status of the
Worksheet is promoted to to_be_verified, because all the remaining analyses
are in to_be_verified state:
>>> ref_uid = api.get_uid(ref)
>>> worksheet.removeAnalysis(ref)
>>> len(worksheet.getReferenceAnalyses())
1
>>> api.get_object_by_uid(ref_uid, None) is None
True
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And the blank itself no longer exists in the system:
>>> api.get_object_by_uid(ref_uid, None) is None
True
And now, I cannot add blanks anymore:
>>> worksheet.addReferenceAnalyses(ref_sample, [Cu, Fe, Au])
[]
>>> len(worksheet.getReferenceAnalyses())
1
Check permissions for Assign transition
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Add blank analyses:
>>> len(worksheet.addReferenceAnalyses(ref_sample, [Cu, Fe, Au]))
3
Since a reference analysis can only live inside a Worksheet, the initial state
of the blank is assigned by default:
>>> blanks = worksheet.getReferenceAnalyses()
>>> map(api.get_workflow_status_of, blanks)
['assigned', 'assigned', 'assigned']
Reference Analysis (Blanks) retract guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisBlankRetract
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IRetracted
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(obj, transition_id, target_state_id):
...     success = do_action_for(obj, transition_id)[0]
...     state = api.get_workflow_status_of(obj)
...     return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> blank_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': api.get_uid(Cu), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '0', 'min': '0', 'max': '0'},]
>>> blank_def.setReferenceResults(blank_refs)
>>> blank_sample = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blank_def,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
Blank retraction basic constraints
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get the blank and submit:
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(0)
>>> try_transition(blank, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(blank)
'to_be_verified'
Retract the blank:
>>> try_transition(blank, "retract", "retracted")
True
>>> api.get_workflow_status_of(blank)
'retracted'
And one additional blank has been added in the assigned state:
>>> references = worksheet.getReferenceAnalyses()
>>> sorted(map(api.get_workflow_status_of, references))
['assigned', 'retracted']
And the Worksheet has been transitioned to open:
>>> api.get_workflow_status_of(worksheet)
'open'
While the Analysis Request is still in to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
The new blank is a copy of the retracted one:
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "assigned", references)[0]
>>> retest.getKeyword() == blank.getKeyword()
True
>>> retest.getReferenceAnalysesGroupID() == blank.getReferenceAnalysesGroupID()
True
>>> retest.getRetestOf() == blank
True
>>> blank.getRetest() == retest
True
>>> retest.getAnalysisService() == blank.getAnalysisService()
True
And keeps the same results as the retracted one:
>>> retest.getResult() == blank.getResult()
True
And is located in the same slot as well:
>>> worksheet.get_slot_position_for(blank) == worksheet.get_slot_position_for(retest)
True
If I submit the result for the new blank:
>>> try_transition(retest, "submit", "to_be_verified")
True
The status of both the blank and the Worksheet is “to_be_verified”:
>>> api.get_workflow_status_of(retest)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And I can even retract the retest:
>>> try_transition(retest, "retract", "retracted")
True
>>> api.get_workflow_status_of(retest)
'retracted'
And one additional blank has been added in the assigned state:
>>> references = worksheet.getReferenceAnalyses()
>>> sorted(map(api.get_workflow_status_of, references))
['assigned', 'retracted', 'retracted']
And the Worksheet has been transitioned to open:
>>> api.get_workflow_status_of(worksheet)
'open'
Retract transition when a duplicate from same Reference Sample is added
When analyses from the same Reference Sample are added to a worksheet, the
worksheet allocates a different slot for each of them, although each slot keeps
track of the container the blank belongs to (in this case the same Reference
Sample). Hence, when a reference analysis is retracted, the retest must be
added in the same position as the original, regardless of how many blanks from
the same reference sample exist.
Further information: https://github.com/senaite/senaite.core/pull/1179
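The allocation rule above can be pictured with a toy slot map, independent of SENAITE (the class and method names here are illustrative, not the actual Worksheet API): each added reference analysis opens a new slot, and a retest is placed in the slot of its original.

```python
class SlotMap:
    """Toy model of how a worksheet tracks slot positions (illustrative)."""

    def __init__(self):
        self.slots = []          # slot i-1 holds the analyses at position i
        self.position_of = {}    # analysis -> 1-based slot position

    def add_to_new_slot(self, analysis):
        # Every reference analysis gets its own slot, even when it comes
        # from the same Reference Sample as an existing one
        self.slots.append([analysis])
        self.position_of[analysis] = len(self.slots)

    def add_retest(self, original, retest):
        # A retest always lands in the slot of the analysis it replaces
        pos = self.position_of[original]
        self.slots[pos - 1].append(retest)
        self.position_of[retest] = pos
```

This mirrors what the doctests below exercise through `get_slot_position_for`.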
Create an Analysis Request:
>>> ar = new_ar([Cu])
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
...     worksheet.addAnalysis(analysis)
Add same reference sample twice:
>>> blank_1 = worksheet.addReferenceAnalyses(blank_sample, [api.get_uid(Cu)])[0]
>>> blank_2 = worksheet.addReferenceAnalyses(blank_sample, [api.get_uid(Cu)])[0]
>>> blank_1 != blank_2
True
Get the reference analyses positions:
>>> blank_1_pos = worksheet.get_slot_position_for(blank_1)
>>> blank_1_pos
1
>>> blank_2_pos = worksheet.get_slot_position_for(blank_2)
>>> blank_2_pos
2
Submit both:
>>> blank_1.setResult(12)
>>> blank_2.setResult(13)
>>> try_transition(blank_1, "submit", "to_be_verified")
True
>>> try_transition(blank_2, "submit", "to_be_verified")
True
Retract the first blank. The retest has been added in same slot:
>>> try_transition(blank_1, "retract", "retracted")
True
>>> retest_1 = blank_1.getRetest()
>>> worksheet.get_slot_position_for(retest_1)
1
And the same if we retract the second blank analysis:
>>> try_transition(blank_2, "retract", "retracted")
True
>>> retest_2 = blank_2.getRetest()
>>> worksheet.get_slot_position_for(retest_2)
2
IRetracted interface is provided by retracted blanks
When retracted, blank analyses are marked with the IRetracted interface:
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(sample, blank_sample)
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(12)
>>> success = do_action_for(blank, "submit")
>>> IRetracted.providedBy(blank)
False
>>> success = do_action_for(blank, "retract")
>>> IRetracted.providedBy(blank)
True
But the retest does not provide IRetracted:
>>> retest = blank.getRetest()
>>> IRetracted.providedBy(retest)
False
Reference Analysis (Blank) multi-verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisBlankMultiVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(obj, transition_id, target_state_id):
...     success = do_action_for(obj, transition_id)[0]
...     state = api.get_workflow_status_of(obj)
...     return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> blank_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': api.get_uid(Cu), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '0', 'min': '0', 'max': '0'},]
>>> blank_def.setReferenceResults(blank_refs)
>>> blank_sample = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blank_def,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
Multiverify not allowed if multi-verification is not enabled
Enable self verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get the blank and submit:
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(0)
>>> try_transition(blank, "submit", "to_be_verified")
True
The status of the blank and the rest is to_be_verified:
>>> api.get_workflow_status_of(blank)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
I cannot multi-verify the blank because multi-verification is not enabled:
>>> isTransitionAllowed(blank, "multi_verify")
False
>>> try_transition(blank, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(blank)
'to_be_verified'
But I can verify:
>>> isTransitionAllowed(blank, "verify")
True
>>> try_transition(blank, "verify", "verified")
True
And the status of the blank is now verified:
>>> api.get_workflow_status_of(blank)
'verified'
While the rest remain in to_be_verified state because the regular analysis
hasn’t been verified yet:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Multiverify transition with multi-verification enabled
The system allows setting multiple verifiers, either at Setup or Analysis
Service level. If set, the blank will only transition to verified when the
total number of verifications equals the configured number of required
verifications.
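The gist of this policy can be sketched in plain Python, outside SENAITE (the class and method names are illustrative, not the actual API): an item needs N verifications and, under the "self_multi_disabled" policy, rejects a repeat attempt by the user who verified last.

```python
class MultiVerifiable:
    """Illustrative sketch of an item that requires N verifications."""

    def __init__(self, required, same_user_allowed=False):
        self.required = required
        self.same_user_allowed = same_user_allowed
        self.verifiers = []

    def remaining(self):
        return self.required - len(self.verifiers)

    def can_verify(self, user):
        if self.remaining() <= 0:
            return False
        # 'self_multi_disabled': same user cannot verify twice in a row
        if not self.same_user_allowed and self.verifiers and self.verifiers[-1] == user:
            return False
        return True

    def verify(self, user):
        if not self.can_verify(user):
            return False
        self.verifiers.append(user)
        return True

    @property
    def verified(self):
        return self.remaining() == 0
```

With three required verifications, a second consecutive attempt by the same user fails, just as the doctests below demonstrate through `multi_verify`.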
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Not allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_disabled')
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get the blank and submit:
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(12)
>>> try_transition(blank, "submit", "to_be_verified")
True
The status of the blank and the rest is to_be_verified:
>>> api.get_workflow_status_of(blank)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
I cannot verify:
>>> isTransitionAllowed(blank, "verify")
False
>>> try_transition(blank, "verify", "verified")
False
>>> api.get_workflow_status_of(blank)
'to_be_verified'
Because multi-verification is enabled:
>>> bikasetup.getNumberOfRequiredVerifications()
3
And there are 3 verifications remaining:
>>> blank.getNumberOfRemainingVerifications()
3
But I can multi-verify:
>>> isTransitionAllowed(blank, "multi_verify")
True
>>> try_transition(blank, "multi_verify", "to_be_verified")
True
The status remains in to_be_verified:
>>> api.get_workflow_status_of(blank)
'to_be_verified'
And my user id is recorded as such:
>>> action = api.get_review_history(blank)[0]
>>> action['actor'] == TEST_USER_ID
True
And now, there are two verifications remaining:
>>> blank.getNumberOfRemainingVerifications()
2
So, I cannot verify yet:
>>> isTransitionAllowed(blank, "verify")
False
>>> try_transition(blank, "verify", "verified")
False
>>> api.get_workflow_status_of(blank)
'to_be_verified'
But I cannot multi-verify either, because I am the same user who performed the
last multi-verification:
>>> isTransitionAllowed(blank, "multi_verify")
False
>>> try_transition(blank, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(blank)
'to_be_verified'
And the system is configured to not allow the same user to verify multiple times:
>>> bikasetup.getTypeOfmultiVerification()
'self_multi_disabled'
But I can multi-verify if I change the type of multi-verification:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
>>> isTransitionAllowed(blank, "multi_verify")
True
>>> try_transition(blank, "multi_verify", "to_be_verified")
True
The status remains in to_be_verified:
>>> api.get_workflow_status_of(blank)
'to_be_verified'
Since there is only one verification remaining, I cannot multi-verify again:
>>> blank.getNumberOfRemainingVerifications()
1
>>> isTransitionAllowed(blank, "multi_verify")
False
>>> try_transition(blank, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(blank)
'to_be_verified'
But now, I can verify:
>>> isTransitionAllowed(blank, "verify")
True
>>> try_transition(blank, "verify", "verified")
True
There are no verifications remaining:
>>> blank.getNumberOfRemainingVerifications()
0
And the status of the blank is now verified:
>>> api.get_workflow_status_of(blank)
'verified'
While the rest remain in to_be_verified state because the regular analysis
hasn’t been verified yet:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
If we multi-verify the regular analysis (2+1 times):
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
>>> try_transition(analysis, "verify", "verified")
True
The rest transition to verified:
>>> api.get_workflow_status_of(ar)
'verified'
>>> api.get_workflow_status_of(worksheet)
'verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Check permissions for Multi verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get the blank and submit:
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(12)
>>> try_transition(blank, "submit", "to_be_verified")
True
Exactly these roles can multi_verify (the transition is guarded by the Verify
permission):
>>> get_roles_for_permission("senaite.core: Transition: Verify", blank)
['LabManager', 'Manager', 'Verifier']
The current user can multi_verify because they have the LabManager role:
>>> isTransitionAllowed(blank, "multi_verify")
True
Also if the user has the roles Manager or Verifier:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(blank, "multi_verify")
True
>>> setRoles(portal, TEST_USER_ID, ['Verifier',])
>>> isTransitionAllowed(blank, "multi_verify")
True
But cannot for other roles:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(blank, "multi_verify")
False
Even if the user is Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(blank, "multi_verify")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(blank, "multi_verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by fully verified blanks
Blanks do not provide IVerified unless fully verified:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.setNumberOfRequiredVerifications(2)
>>> bikasetup.setTypeOfmultiVerification("self_multi_enabled")
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(sample, blank_sample)
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(0)
>>> success = do_action_for(blank, "submit")
>>> IVerified.providedBy(blank)
False
>>> success = do_action_for(blank, "multi_verify")
>>> IVerified.providedBy(blank)
False
>>> success = do_action_for(blank, "verify")
>>> IVerified.providedBy(blank)
True
>>> bikasetup.setSelfVerificationEnabled(False)
Reference Analysis (Blanks) submission guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisBlankSubmit
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(obj, transition_id, target_state_id):
...     success = do_action_for(obj, transition_id)[0]
...     state = api.get_workflow_status_of(obj)
...     return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> blank_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': api.get_uid(Cu), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '0', 'min': '0', 'max': '0'},]
>>> blank_def.setReferenceResults(blank_refs)
>>> control_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': api.get_uid(Cu), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '15', 'min': '14.5', 'max': '15.5'},]
>>> control_def.setReferenceResults(control_refs)
>>> blank_sample = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blank_def,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
>>> control_sample = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=control_def,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Blank submission basic constraints
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu, Fe, Au])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get blank analyses:
>>> blanks = worksheet.getReferenceAnalyses()
>>> blank_1 = blanks[0]
>>> blank_2 = blanks[1]
>>> blank_3 = blanks[2]
Cannot submit a blank without a result:
>>> try_transition(blank_1, "submit", "to_be_verified")
False
Even if we try with an empty or None result:
>>> blank_1.setResult('')
>>> try_transition(blank_1, "submit", "to_be_verified")
False
>>> blank_1.setResult(None)
>>> try_transition(blank_1, "submit", "to_be_verified")
False
But it will work if we try with a result of 0:
>>> blank_1.setResult(0)
>>> try_transition(blank_1, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(blank_1)
'to_be_verified'
And we cannot re-submit a blank that has already been submitted:
>>> try_transition(blank_1, "submit", "to_be_verified")
False
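The essential check behind this guard, distinguishing a missing result (None or an empty string) from a legitimate zero, can be sketched as follows (a hypothetical helper, not SENAITE's actual guard code):

```python
def has_result(result):
    # A result of 0 must pass, while None and '' must not
    return result is not None and str(result).strip() != ""
```

So `has_result(0)` is True, while `has_result('')` and `has_result(None)` are both False, matching the transitions exercised above.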
Auto-submission of a Worksheet when all its analyses are submitted
Create a Worksheet:
>>> ar = new_ar([Cu, Fe, Au])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
Set results and submit all analyses from the worksheet except blanks:
>>> for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getRegularAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
While the Analysis Request has been transitioned to to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
The worksheet has not been transitioned:
>>> api.get_workflow_status_of(worksheet)
'open'
Because blanks are still in assigned state:
>>> map(api.get_workflow_status_of, worksheet.getReferenceAnalyses())
['assigned', 'assigned', 'assigned']
If we set results and submit blanks:
>>> for analysis in worksheet.getReferenceAnalyses():
... analysis.setResult(0)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getReferenceAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
The worksheet will automatically be submitted too:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
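The cascade shown above amounts to deriving the worksheet state from the states of all the analyses it contains, regular and reference alike. A minimal sketch (illustrative only, not the actual workflow code):

```python
def worksheet_state(analysis_states):
    """Promote to 'to_be_verified' only when every contained analysis,
    regular and reference alike, has been submitted (illustrative)."""
    if analysis_states and all(s == "to_be_verified" for s in analysis_states):
        return "to_be_verified"
    return "open"
```

A single analysis still in the assigned state is enough to keep the worksheet open.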
Submission of blanks with interim fields set
Set interims to the analysis Au:
>>> Au.setInterimFields([
... {"keyword": "interim_1", "title": "Interim 1",},
... {"keyword": "interim_2", "title": "Interim 2",}])
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get blank analyses:
>>> blank = worksheet.getReferenceAnalyses()[0]
Cannot submit if no result is set:
>>> try_transition(blank, "submit", "to_be_verified")
False
But even if we set a result, we cannot submit because interims are missing:
>>> blank.setResult(12)
>>> blank.getResult()
'12'
>>> try_transition(blank, "submit", "to_be_verified")
False
So, if the blank has interims defined, all of them are required too:
>>> blank.setInterimValue("interim_1", 15)
>>> blank.getInterimValue("interim_1")
'15'
>>> blank.getInterimValue("interim_2")
''
>>> try_transition(blank, "submit", "to_be_verified")
False
Even if we set an invalid (None or empty) value for an interim:
>>> blank.setInterimValue("interim_2", None)
>>> blank.getInterimValue("interim_2")
''
>>> try_transition(blank, "submit", "to_be_verified")
False
>>> blank.setInterimValue("interim_2", '')
>>> blank.getInterimValue("interim_2")
''
>>> try_transition(blank, "submit", "to_be_verified")
False
But it will work if the value is 0:
>>> blank.setInterimValue("interim_2", 0)
>>> blank.getInterimValue("interim_2")
'0'
>>> try_transition(blank, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(blank)
'to_be_verified'
It might also happen the other way round: we set the interims but not a result:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setInterimValue("interim_1", 10)
>>> blank.setInterimValue("interim_2", 20)
>>> try_transition(blank, "submit", "to_be_verified")
False
Still, the result is required:
>>> blank.setResult(12)
>>> try_transition(blank, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(blank)
'to_be_verified'
Submission of blank analysis with interim calculation
If a blank analysis has a calculation assigned, the result will be calculated
automatically based on that calculation. If the calculation has interims set,
only those that do not have a default value will be required.
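The requiredness rule above can be sketched in plain Python. This is an illustrative model, not the SENAITE API: `required_interims` is a hypothetical helper, encoding that `None` and the empty string count as missing while `0` is a valid value.

```python
# Illustrative sketch of the rule above (not the SENAITE API): when a
# calculation is assigned, only interims without a usable default value
# must be filled in before submission.
def required_interims(interims):
    """Return the keywords of interims that still need a value."""
    missing = []
    for interim in interims:
        value = interim.get("value")
        # None and "" count as "no default"; 0 is a perfectly valid value
        if value is None or value == "":
            missing.append(interim["keyword"])
    return missing

interims = [
    {"keyword": "IT1", "value": 10},    # has a default -> not required
    {"keyword": "IT2", "value": 2},     # has a default -> not required
    {"keyword": "IT3", "value": ""},    # empty default -> required
    {"keyword": "IT4", "value": None},  # None default  -> required
    {"keyword": "IT5"},                 # no default    -> required
]
print(required_interims(interims))  # ['IT3', 'IT4', 'IT5']
```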
Prepare the calculation and set the calculation to analysis Au:
>>> Au.setInterimFields([])
>>> calc = api.create(bikasetup.bika_calculations, 'Calculation', title='Test Calculation')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1', 'value': 10}
>>> interim_2 = {'keyword': 'IT2', 'title': 'Interim 2', 'value': 2}
>>> interim_3 = {'keyword': 'IT3', 'title': 'Interim 3', 'value': ''}
>>> interim_4 = {'keyword': 'IT4', 'title': 'Interim 4', 'value': None}
>>> interim_5 = {'keyword': 'IT5', 'title': 'Interim 5'}
>>> interims = [interim_1, interim_2, interim_3, interim_4, interim_5]
>>> calc.setInterimFields(interims)
>>> calc.setFormula("[IT1]+[IT2]+[IT3]+[IT4]+[IT5]")
>>> Au.setCalculation(calc)
Create a Worksheet with blank:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
Cannot submit if no result is set
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> try_transition(blank, "submit", "to_be_verified")
False
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
Set a value for interim IT5:
>>> blank.setInterimValue("IT5", 5)
Cannot transition because IT3 and IT4 have None/empty values as default:
>>> try_transition(blank, "submit", "to_be_verified")
False
Let’s set a value for those interims:
>>> blank.setInterimValue("IT3", 3)
>>> try_transition(blank, "submit", "to_be_verified")
False
>>> blank.setInterimValue("IT4", 4)
Since interims IT1 and IT2 have default values set, the analysis will submit:
>>> try_transition(blank, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(blank)
'to_be_verified'
Submission of blanks with dependencies
Blanks with dependencies are not allowed. Blanks can only be created
from analyses whose calculations have no dependencies.
TODO Might we consider allowing the creation of blanks with dependencies?
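The selection rule can be sketched as follows; `blank_candidates` and the dependency map are hypothetical, meant only to mirror the Cu/Fe/Au setup used below.

```python
# Illustrative sketch (not the SENAITE API): a blank is only created for
# services whose calculation does not depend on any other service.
def blank_candidates(dependencies):
    """dependencies maps a service keyword to the keywords it depends on."""
    return [kw for kw, deps in dependencies.items() if not deps]

# Mirrors the setup below: Fe = Cu * 10, Au = (IT1 + Fe) / 2
deps = {"Cu": [], "Fe": ["Cu"], "Au": ["Fe"]}
print(blank_candidates(deps))  # ['Cu']
```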
Reset the interim fields for analysis Au:
>>> Au.setInterimFields([])
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1'}
>>> calc_au.setInterimFields([interim_1])
>>> calc_au.setFormula("([IT1]+[Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
Create a Worksheet with blank:
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> analyses = worksheet.getRegularAnalyses()
Only one blank was created, for Cu, because it is the only analysis with no
dependencies:
>>> blanks = worksheet.getReferenceAnalyses()
>>> len(blanks) == 1
True
>>> blank = blanks[0]
>>> blank.getKeyword()
'Cu'
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
We cannot submit the routine analysis Fe because there is no result for the
routine analysis Cu, and the blank of Cu cannot be used as a dependency:
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> try_transition(fe_analysis, "submit", "to_be_verified")
False
Check permissions for Submit transition
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Set a result:
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(23)
Exactly these roles can submit:
>>> get_roles_for_permission("senaite.core: Edit Results", blank)
['Analyst', 'LabManager', 'Manager']
And these roles can view results:
>>> get_roles_for_permission("senaite.core: View Results", blank)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'RegulatoryInspector']
The current user can submit because they have the LabManager role:
>>> isTransitionAllowed(blank, "submit")
True
But users with other roles cannot:
>>> setRoles(portal, TEST_USER_ID, ['Authenticated', 'LabClerk', 'RegulatoryInspector', 'Sampler'])
>>> isTransitionAllowed(blank, "submit")
False
Not even if they are the Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(blank, "submit")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(blank, "submit")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Reference Analysis (Blanks) verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisBlankVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> blank_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': api.get_uid(Cu), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '0', 'min': '0', 'max': '0'},]
>>> blank_def.setReferenceResults(blank_refs)
>>> blank_sample = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blank_def,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
Blank verification basic constraints
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get the blank and submit:
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(0)
>>> try_transition(blank, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(blank)
'to_be_verified'
I cannot verify the blank because I am the same user who submitted it:
>>> try_transition(blank, "verify", "verified")
False
>>> api.get_workflow_status_of(blank)
'to_be_verified'
And I cannot verify the Worksheet either, because it can only be verified once
all the analyses it contains are verified (this then happens automatically):
>>> try_transition(worksheet, "verify", "verified")
False
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
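The cascade rule above can be sketched in plain Python; `can_verify_worksheet` is a hypothetical helper, not the SENAITE workflow engine.

```python
# Illustrative sketch: a worksheet is only promoted to "verified" once
# every analysis it contains (regular and reference) is verified.
def can_verify_worksheet(analysis_states):
    return all(state == "verified" for state in analysis_states)

print(can_verify_worksheet(["verified", "to_be_verified"]))  # False
print(can_verify_worksheet(["verified", "verified"]))        # True
```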
But if I enable self-verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Then, I can verify my own result:
>>> try_transition(blank, "verify", "verified")
True
But the worksheet remains in to_be_verified, because the regular analyses have
not been verified yet:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And we cannot re-verify a blank that has been verified already:
>>> try_transition(blank, "verify", "verified")
False
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Check permissions for Verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, blank_sample)
>>> submit_regular_analyses(worksheet)
Get the blank and submit:
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(12)
>>> try_transition(blank, "submit", "to_be_verified")
True
Exactly these roles can verify:
>>> get_roles_for_permission("senaite.core: Transition: Verify", blank)
['LabManager', 'Manager', 'Verifier']
The current user can verify because they have the LabManager role:
>>> isTransitionAllowed(blank, "verify")
True
Also if the user has the roles Manager or Verifier:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(blank, "verify")
True
>>> setRoles(portal, TEST_USER_ID, ['Verifier',])
>>> isTransitionAllowed(blank, "verify")
True
But users with other roles cannot:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(blank, "verify")
False
Not even if they are the Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(blank, "verify")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(blank, "verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by verified blanks
When verified, blank analyses are marked with the IVerified interface:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(sample, blank_sample)
>>> blank = worksheet.getReferenceAnalyses()[0]
>>> blank.setResult(0)
>>> success = do_action_for(blank, "submit")
>>> IVerified.providedBy(blank)
False
>>> success = do_action_for(blank, "verify")
>>> IVerified.providedBy(blank)
True
>>> bikasetup.setSelfVerificationEnabled(False)
Reference Analysis (Controls) assign guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisControlAssign
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... return ar
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> ref_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="control definition", control=True)
>>> ref_refs = [{'uid': api.get_uid(Cu), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '0', 'min': '0', 'max': '0'},]
>>> ref_def.setReferenceResults(ref_refs)
>>> ref_sample = api.create(supplier, "ReferenceSample", title="control",
... ReferenceDefinition=ref_def,
... control=True, ExpiryDate=date_future,
... ReferenceResults=ref_refs)
Assign transition and guard basic constraints
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Add a control:
>>> ref_analyses = worksheet.addReferenceAnalyses(ref_sample, [Cu, Fe, Au])
>>> len(ref_analyses)
3
The status of the reference analyses is assigned:
>>> ref_analyses = worksheet.getReferenceAnalyses()
>>> map(api.get_workflow_status_of, ref_analyses)
['assigned', 'assigned', 'assigned']
All of them are controls:
>>> map(lambda ref: ref.getReferenceType(), ref_analyses)
['c', 'c', 'c']
And are associated to the worksheet:
>>> wuid = list(set(map(lambda ref: ref.getWorksheetUID(), ref_analyses)))
>>> len(wuid)
1
>>> wuid[0] == api.get_uid(worksheet)
True
Controls do not have an Analyst assigned, though:
>>> list(set(map(lambda ref: ref.getAnalyst(), ref_analyses)))
['']
If I assign a user to the Worksheet, the same user will be assigned to its
analyses:
>>> worksheet.setAnalyst(TEST_USER_ID)
>>> worksheet.getAnalyst() == TEST_USER_ID
True
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, analyses)
[]
And to the controls as well:
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, ref_analyses)
[]
I can remove one of the controls from the Worksheet:
>>> ref = ref_analyses[0]
>>> ref_uid = api.get_uid(ref)
>>> worksheet.removeAnalysis(ref)
>>> len(worksheet.getReferenceAnalyses())
2
And the removed control no longer exists:
>>> api.get_object_by_uid(ref_uid, None) is None
True
From assigned state I can do submit:
>>> ref_analyses = worksheet.getReferenceAnalyses()
>>> map(api.get_workflow_status_of, ref_analyses)
['assigned', 'assigned']
>>> ref_analyses[0].setResult(20)
>>> try_transition(ref_analyses[0], "submit", "to_be_verified")
True
And controls transition to to_be_verified:
>>> map(api.get_workflow_status_of, ref_analyses)
['to_be_verified', 'assigned']
While keeping the Analyst that was assigned to the worksheet:
>>> filter(lambda an: an.getAnalyst() != TEST_USER_ID, ref_analyses)
[]
And since there are still regular analyses in the Worksheet that have not been
submitted yet, the Worksheet remains in open state:
>>> api.get_workflow_status_of(worksheet)
'open'
I submit the results for the rest of analyses:
>>> for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(10)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getRegularAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
And since there is a control that has not been submitted yet, the Worksheet
remains in open state:
>>> ref = worksheet.getReferenceAnalyses()[1]
>>> api.get_workflow_status_of(ref)
'assigned'
>>> api.get_workflow_status_of(worksheet)
'open'
But if I remove the control that has not been submitted yet, the status of the
Worksheet is promoted to to_be_verified, because all the remaining analyses are
in to_be_verified state:
>>> ref_uid = api.get_uid(ref)
>>> worksheet.removeAnalysis(ref)
>>> len(worksheet.getReferenceAnalyses())
1
>>> api.get_object_by_uid(ref_uid, None) is None
True
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And the control itself no longer exists in the system:
>>> api.get_object_by_uid(ref_uid, None) is None
True
And now, I cannot add controls anymore:
>>> worksheet.addReferenceAnalyses(ref_sample, [Cu, Fe, Au])
[]
>>> len(worksheet.getReferenceAnalyses())
1
Check permissions for Assign transition
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
>>> transitioned = do_action_for(ar, "receive")
>>> analyses = ar.getAnalyses(full_objects=True)
Create a Worksheet and add the analyses:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in analyses:
... worksheet.addAnalysis(analysis)
Add control analyses:
>>> len(worksheet.addReferenceAnalyses(ref_sample, [Cu, Fe, Au]))
3
Since a reference analysis can only live inside a Worksheet, the initial state
of the controls is assigned by default:
>>> controls = worksheet.getReferenceAnalyses()
>>> map(api.get_workflow_status_of, controls)
['assigned', 'assigned', 'assigned']
Reference Analysis (Control) multi-verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisControlMultiVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> control_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': api.get_uid(Cu), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '15', 'min': '14.5', 'max': '15.5'},]
>>> control_def.setReferenceResults(control_refs)
>>> control_sample = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=control_def,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Multiverify not allowed if multi-verification is not enabled
Enable self verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get the control and submit:
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(0)
>>> try_transition(control, "submit", "to_be_verified")
True
The status of control and others is to_be_verified:
>>> api.get_workflow_status_of(control)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
I cannot multi-verify the control because multi-verification is not enabled:
>>> isTransitionAllowed(control, "multi_verify")
False
>>> try_transition(control, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(control)
'to_be_verified'
But I can verify:
>>> isTransitionAllowed(control, "verify")
True
>>> try_transition(control, "verify", "verified")
True
And the status of the control is now verified:
>>> api.get_workflow_status_of(control)
'verified'
While the rest remain in to_be_verified state because the regular analysis
hasn’t been verified yet:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Multiverify transition with multi-verification enabled
The system allows setting multiple verifiers, either at Setup level or per
Analysis Service. If set, the control transitions to verified only when the
total number of verifications equals the configured number of required
verifications.
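The counting rule exercised below can be sketched in plain Python. This is an illustrative model, not the SENAITE API: `allowed_transitions` and its parameters are hypothetical names.

```python
# Illustrative sketch (not the SENAITE API) of the counting rule: with N
# required verifications, "multi_verify" consumes one slot while more than
# one remains, and the last slot is consumed by "verify". Under the
# 'self_multi_disabled' policy, the same user cannot verify twice in a row.
def allowed_transitions(required, done, last_verifier, user, same_user_ok):
    remaining = required - done
    if remaining <= 0:
        return []
    if not same_user_ok and last_verifier == user:
        return []
    return ["verify"] if remaining == 1 else ["multi_verify"]

print(allowed_transitions(3, 0, None, "user1", False))     # ['multi_verify']
print(allowed_transitions(3, 1, "user1", "user1", False))  # []
print(allowed_transitions(3, 2, "user2", "user1", False))  # ['verify']
```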
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Not allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_disabled')
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get the control and submit:
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(12)
>>> try_transition(control, "submit", "to_be_verified")
True
The status of control and others is to_be_verified:
>>> api.get_workflow_status_of(control)
'to_be_verified'
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
I cannot verify:
>>> isTransitionAllowed(control, "verify")
False
>>> try_transition(control, "verify", "verified")
False
>>> api.get_workflow_status_of(control)
'to_be_verified'
Because multi-verification is enabled:
>>> bikasetup.getNumberOfRequiredVerifications()
3
And there are 3 verifications remaining:
>>> control.getNumberOfRemainingVerifications()
3
But I can multi-verify:
>>> isTransitionAllowed(control, "multi_verify")
True
>>> try_transition(control, "multi_verify", "to_be_verified")
True
The status remains in to_be_verified:
>>> api.get_workflow_status_of(control)
'to_be_verified'
And my user id is recorded as such:
>>> action = api.get_review_history(control)[0]
>>> action['actor'] == TEST_USER_ID
True
And now, there are two verifications remaining:
>>> control.getNumberOfRemainingVerifications()
2
So, I cannot verify yet:
>>> isTransitionAllowed(control, "verify")
False
>>> try_transition(control, "verify", "verified")
False
>>> api.get_workflow_status_of(control)
'to_be_verified'
But I cannot multi-verify either, because I am the same user who did the last
multi-verification:
>>> isTransitionAllowed(control, "multi_verify")
False
>>> try_transition(control, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(control)
'to_be_verified'
And the system is configured to not allow the same user to verify multiple times:
>>> bikasetup.getTypeOfmultiVerification()
'self_multi_disabled'
But I can multi-verify if I change the type of multi-verification:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
>>> isTransitionAllowed(control, "multi_verify")
True
>>> try_transition(control, "multi_verify", "to_be_verified")
True
The status remains in to_be_verified:
>>> api.get_workflow_status_of(control)
'to_be_verified'
Since there is only one verification remaining, I cannot multi-verify again:
>>> control.getNumberOfRemainingVerifications()
1
>>> isTransitionAllowed(control, "multi_verify")
False
>>> try_transition(control, "multi_verify", "to_be_verified")
False
>>> api.get_workflow_status_of(control)
'to_be_verified'
But now, I can verify:
>>> isTransitionAllowed(control, "verify")
True
>>> try_transition(control, "verify", "verified")
True
There are no verifications remaining:
>>> control.getNumberOfRemainingVerifications()
0
And the status of the control is now verified:
>>> api.get_workflow_status_of(control)
'verified'
While the rest remain in to_be_verified state because the regular analysis
hasn’t been verified yet:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
If we multi-verify the regular analysis (2+1 times):
>>> analysis = ar.getAnalyses(full_objects=True)[0]
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
>>> try_transition(analysis, "multi_verify", "to_be_verified")
True
>>> try_transition(analysis, "verify", "verified")
True
The rest transition to verified as well:
>>> api.get_workflow_status_of(ar)
'verified'
>>> api.get_workflow_status_of(worksheet)
'verified'
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
Check permissions for Multi verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Set the number of required verifications to 3:
>>> bikasetup.setNumberOfRequiredVerifications(3)
Set the multi-verification to “Allow same user to verify multiple times”:
>>> bikasetup.setTypeOfmultiVerification('self_multi_enabled')
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get the control and submit:
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(12)
>>> try_transition(control, "submit", "to_be_verified")
True
Exactly these roles can multi_verify:
>>> get_roles_for_permission("senaite.core: Transition: Verify", control)
['LabManager', 'Manager', 'Verifier']
The current user can multi_verify because they have the LabManager role:
>>> isTransitionAllowed(control, "multi_verify")
True
Also if the user has the roles Manager or Verifier:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(control, "multi_verify")
True
>>> setRoles(portal, TEST_USER_ID, ['Verifier',])
>>> isTransitionAllowed(control, "multi_verify")
True
But cannot for other roles:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(control, "multi_verify")
False
Even if the user is Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(control, "multi_verify")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(control, "multi_verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by fully verified controls
Controls do not provide IVerified unless fully verified:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.setNumberOfRequiredVerifications(2)
>>> bikasetup.setTypeOfmultiVerification("self_multi_enabled")
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(sample, control_sample)
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(12)
>>> success = do_action_for(control, "submit")
>>> IVerified.providedBy(control)
False
>>> success = do_action_for(control, "multi_verify")
>>> IVerified.providedBy(control)
False
>>> success = do_action_for(control, "verify")
>>> IVerified.providedBy(control)
True
>>> bikasetup.setSelfVerificationEnabled(False)
Reference Analysis retract guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisControlRetract
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IRetracted
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> control_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': api.get_uid(Cu), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '15', 'min': '14.5', 'max': '15.5'},]
>>> control_def.setReferenceResults(control_refs)
>>> control_sample = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=control_def,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Retract transition and guard basic constraints
Create an Analysis Request and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get the reference and submit:
>>> reference = worksheet.getReferenceAnalyses()[0]
>>> reference.setResult(12)
>>> try_transition(reference, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(reference)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
Retract the reference:
>>> try_transition(reference, "retract", "retracted")
True
>>> api.get_workflow_status_of(reference)
'retracted'
And one new reference analysis has been added in the assigned state:
>>> references = worksheet.getReferenceAnalyses()
>>> sorted(map(api.get_workflow_status_of, references))
['assigned', 'retracted']
And the Worksheet has been transitioned to open:
>>> api.get_workflow_status_of(worksheet)
'open'
While the Analysis Request is still in to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
The new analysis is a copy of retracted one:
>>> retest = filter(lambda an: api.get_workflow_status_of(an) == "assigned", references)[0]
>>> retest.getKeyword() == reference.getKeyword()
True
>>> retest.getReferenceAnalysesGroupID() == reference.getReferenceAnalysesGroupID()
True
>>> retest.getRetestOf() == reference
True
>>> reference.getRetest() == retest
True
>>> retest.getAnalysisService() == reference.getAnalysisService()
True
And keeps the same results as the retracted one:
>>> retest.getResult() == reference.getResult()
True
And is located in the same slot as well:
>>> worksheet.get_slot_position_for(reference) == worksheet.get_slot_position_for(retest)
True
If I submit the result for the new reference:
>>> try_transition(retest, "submit", "to_be_verified")
True
The status of both the reference and the Worksheet is “to_be_verified”:
>>> api.get_workflow_status_of(retest)
'to_be_verified'
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And I can even retract the retest:
>>> try_transition(retest, "retract", "retracted")
True
>>> api.get_workflow_status_of(retest)
'retracted'
And one new reference analysis has been added in the assigned state:
>>> references = worksheet.getReferenceAnalyses()
>>> sorted(map(api.get_workflow_status_of, references))
['assigned', 'retracted', 'retracted']
And the Worksheet has been transitioned to open:
>>> api.get_workflow_status_of(worksheet)
'open'
Retract transition when reference analyses from same Reference Sample are added
When analyses from same Reference Sample are added in a worksheet, the
worksheet allocates different slots for them, although each of the slots keeps
the container the analysis belongs to (in this case the same Reference Sample).
Hence, when retracting a reference analysis, the retest must be added in the
same position as the original, regardless of how many reference analyses from
same reference sample exist.
Further information: https://github.com/senaite/senaite.core/pull/1179
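The slot behaviour described above can be pictured with a small bookkeeping model. This is an illustrative sketch only (SlotLayout, add and add_retest are hypothetical names, not part of senaite.core): each new analysis takes the next free slot, while a retest inherits the slot of the analysis it replaces:

```python
class SlotLayout:
    """Hypothetical sketch of worksheet slot bookkeeping."""

    def __init__(self):
        self.slots = {}       # analysis id -> slot position
        self.next_slot = 1

    def add(self, analysis_id):
        # A new analysis is allocated the next free slot
        self.slots[analysis_id] = self.next_slot
        self.next_slot += 1
        return self.slots[analysis_id]

    def add_retest(self, original_id, retest_id):
        # A retest is placed in the same slot as the retracted original
        self.slots[retest_id] = self.slots[original_id]
        return self.slots[retest_id]

layout = SlotLayout()
layout.add("ref_1")                            # slot 1
layout.add("ref_2")                            # slot 2
print(layout.add_retest("ref_1", "retest_1"))  # 1
print(layout.add_retest("ref_2", "retest_2"))  # 2
```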
Create an Analysis Request:
>>> ar = new_ar([Cu])
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
...     worksheet.addAnalysis(analysis)
Add same reference sample twice:
>>> ref_1 = worksheet.addReferenceAnalyses(control_sample, [api.get_uid(Cu)])[0]
>>> ref_2 = worksheet.addReferenceAnalyses(control_sample, [api.get_uid(Cu)])[0]
>>> ref_1 != ref_2
True
Get the reference analyses positions:
>>> ref_1_pos = worksheet.get_slot_position_for(ref_1)
>>> ref_1_pos
1
>>> ref_2_pos = worksheet.get_slot_position_for(ref_2)
>>> ref_2_pos
2
Submit both:
>>> ref_1.setResult(12)
>>> ref_2.setResult(13)
>>> try_transition(ref_1, "submit", "to_be_verified")
True
>>> try_transition(ref_2, "submit", "to_be_verified")
True
Retract the first reference analysis. The retest has been added in same slot:
>>> try_transition(ref_1, "retract", "retracted")
True
>>> retest_1 = ref_1.getRetest()
>>> worksheet.get_slot_position_for(retest_1)
1
And the same if we retract the second reference analysis:
>>> try_transition(ref_2, "retract", "retracted")
True
>>> retest_2 = ref_2.getRetest()
>>> worksheet.get_slot_position_for(retest_2)
2
IRetracted interface is provided by retracted controls
When retracted, control analyses are marked with the IRetracted interface:
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(sample, control_sample)
>>> reference = worksheet.getReferenceAnalyses()[0]
>>> reference.setResult(12)
>>> success = do_action_for(reference, "submit")
>>> IRetracted.providedBy(reference)
False
>>> success = do_action_for(reference, "retract")
>>> IRetracted.providedBy(reference)
True
But the retest does not provide IRetracted:
>>> retest = reference.getRetest()
>>> IRetracted.providedBy(retest)
False
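The marker behaviour above can be modelled in plain Python (a hypothetical sketch; senaite.core actually uses zope.interface marker interfaces, not a set of strings): retracting flags the original analysis, while the freshly created retest starts without the flag:

```python
class Analysis:
    """Hypothetical stand-in for an analysis with marker flags."""

    def __init__(self):
        self.markers = set()
        self.retest = None

    def retract(self):
        # Retracting marks the original and creates a clean copy
        self.markers.add("IRetracted")
        self.retest = Analysis()
        return self.retest

original = Analysis()
retest = original.retract()
print("IRetracted" in original.markers)  # True
print("IRetracted" in retest.markers)    # False
```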
Reference Analysis (Controls) submission guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisControlSubmit
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> control_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': api.get_uid(Cu), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '15', 'min': '14.5', 'max': '15.5'},]
>>> control_def.setReferenceResults(control_refs)
>>> control_sample = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=control_def,
...                              Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Control submission basic constraints
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu, Fe, Au])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get control analyses:
>>> controls = worksheet.getReferenceAnalyses()
>>> control_1 = controls[0]
>>> control_2 = controls[1]
>>> control_3 = controls[2]
Cannot submit a control without a result:
>>> try_transition(control_1, "submit", "to_be_verified")
False
Even if we try with an empty or None result:
>>> control_1.setResult('')
>>> try_transition(control_1, "submit", "to_be_verified")
False
>>> control_1.setResult(None)
>>> try_transition(control_1, "submit", "to_be_verified")
False
But it will work if we try with a result of 0:
>>> control_1.setResult(0)
>>> try_transition(control_1, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(control_1)
'to_be_verified'
And we cannot re-submit a control that has been submitted already:
>>> try_transition(control_1, "submit", "to_be_verified")
False
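The guard exercised above boils down to a small validity check. The helper below is hypothetical (not a senaite.core function), but captures the rule: None and the empty string block submission, while 0 is a perfectly valid result:

```python
def is_submittable_result(result):
    """Hypothetical sketch of the result guard: None and empty
    strings are not submittable, but 0 is."""
    if result is None:
        return False
    if isinstance(result, str) and result.strip() == "":
        return False
    return True

print(is_submittable_result(None))  # False
print(is_submittable_result(""))    # False
print(is_submittable_result(0))     # True
print(is_submittable_result(12))    # True
```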
Auto submission of a Worksheet when all its analyses are submitted
Create a Worksheet:
>>> ar = new_ar([Cu, Fe, Au])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
Set results and submit all analyses from the worksheet except controls:
>>> for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getRegularAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
While the Analysis Request has been transitioned to to_be_verified:
>>> api.get_workflow_status_of(ar)
'to_be_verified'
The worksheet has not been transitioned:
>>> api.get_workflow_status_of(worksheet)
'open'
Because controls are still in assigned state:
>>> map(api.get_workflow_status_of, worksheet.getReferenceAnalyses())
['assigned', 'assigned', 'assigned']
If we set results and submit controls:
>>> for analysis in worksheet.getReferenceAnalyses():
... analysis.setResult(0)
... transitioned = do_action_for(analysis, "submit")
>>> map(api.get_workflow_status_of, worksheet.getReferenceAnalyses())
['to_be_verified', 'to_be_verified', 'to_be_verified']
The worksheet will automatically be submitted too:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
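The auto-submission rule above can be sketched as a simple predicate over the states of every analysis assigned to the worksheet, reference analyses included. The helper below is a hypothetical illustration, not the senaite.core implementation:

```python
def all_analyses_submitted(statuses):
    """Hypothetical sketch: the worksheet only follows its analyses
    to to_be_verified once every analysis (regular and reference
    alike) has been submitted."""
    return all(status == "to_be_verified" for status in statuses)

regular = ["to_be_verified"] * 3
references = ["assigned"] * 3
print(all_analyses_submitted(regular + references))  # False

references = ["to_be_verified"] * 3
print(all_analyses_submitted(regular + references))  # True
```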
Submission of controls with interim fields set
Set interims to the analysis Au:
>>> Au.setInterimFields([
... {"keyword": "interim_1", "title": "Interim 1",},
... {"keyword": "interim_2", "title": "Interim 2",}])
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get control analyses:
>>> control = worksheet.getReferenceAnalyses()[0]
Cannot submit if no result is set:
>>> try_transition(control, "submit", "to_be_verified")
False
But even if we set a result, we cannot submit because interims are missing:
>>> control.setResult(12)
>>> control.getResult()
'12'
>>> try_transition(control, "submit", "to_be_verified")
False
So, if the control has interims defined, all of them are required too:
>>> control.setInterimValue("interim_1", 15)
>>> control.getInterimValue("interim_1")
'15'
>>> control.getInterimValue("interim_2")
''
>>> try_transition(control, "submit", "to_be_verified")
False
Even if we set an invalid (None, empty) value to an interim:
>>> control.setInterimValue("interim_2", None)
>>> control.getInterimValue("interim_2")
''
>>> try_transition(control, "submit", "to_be_verified")
False
>>> control.setInterimValue("interim_2", '')
>>> control.getInterimValue("interim_2")
''
>>> try_transition(control, "submit", "to_be_verified")
False
But it will work if the value is 0:
>>> control.setInterimValue("interim_2", 0)
>>> control.getInterimValue("interim_2")
'0'
>>> try_transition(control, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(control)
'to_be_verified'
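The interim checks above follow the same rule as results. The helper below is a hypothetical sketch of that rule: every interim needs a value, and 0 counts as a value while None and the empty string do not:

```python
def interims_complete(interim_values):
    """Hypothetical sketch of the interim guard: all interims must
    carry a value; 0 is valid, None and "" are not."""
    def has_value(value):
        return value is not None and str(value).strip() != ""
    return all(has_value(v) for v in interim_values.values())

print(interims_complete({"interim_1": 15, "interim_2": ""}))    # False
print(interims_complete({"interim_1": 15, "interim_2": None}))  # False
print(interims_complete({"interim_1": 15, "interim_2": 0}))     # True
```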
It might also happen the other way round: we set the interims, but not a result:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setInterimValue("interim_1", 10)
>>> control.setInterimValue("interim_2", 20)
>>> try_transition(control, "submit", "to_be_verified")
False
Still, the result is required:
>>> control.setResult(12)
>>> try_transition(control, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(control)
'to_be_verified'
Submission of control analysis with interim calculation
If a control analysis has a calculation assigned, the result will be calculated
automatically based on the calculation. If the calculation has interims set,
only those that do not have a default value set will be required.
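The rule just described can be sketched as a filter over the calculation's interim fields. The helper below is hypothetical (not part of the senaite.core API): interims whose default is None, empty, or missing must be filled in before submission:

```python
def required_interims(interim_fields):
    """Hypothetical sketch: interims without a usable default value
    (None, empty, or absent) are required; a real default, including
    0, makes the interim optional."""
    def has_default(field):
        value = field.get("value")
        return value is not None and str(value).strip() != ""
    return [f["keyword"] for f in interim_fields if not has_default(f)]

interims = [
    {"keyword": "IT1", "value": 10},
    {"keyword": "IT2", "value": 2},
    {"keyword": "IT3", "value": ""},
    {"keyword": "IT4", "value": None},
    {"keyword": "IT5"},  # no default at all
]
print(required_interims(interims))  # ['IT3', 'IT4', 'IT5']
```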
Prepare the calculation and set the calculation to analysis Au:
>>> Au.setInterimFields([])
>>> calc = api.create(bikasetup.bika_calculations, 'Calculation', title='Test Calculation')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1', 'value': 10}
>>> interim_2 = {'keyword': 'IT2', 'title': 'Interim 2', 'value': 2}
>>> interim_3 = {'keyword': 'IT3', 'title': 'Interim 3', 'value': ''}
>>> interim_4 = {'keyword': 'IT4', 'title': 'Interim 4', 'value': None}
>>> interim_5 = {'keyword': 'IT5', 'title': 'Interim 5'}
>>> interims = [interim_1, interim_2, interim_3, interim_4, interim_5]
>>> calc.setInterimFields(interims)
>>> calc.setFormula("[IT1]+[IT2]+[IT3]+[IT4]+[IT5]")
>>> Au.setCalculation(calc)
Create a Worksheet with control:
>>> ar = new_ar([Au])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
Cannot submit if no result is set:
>>> control = worksheet.getReferenceAnalyses()[0]
>>> try_transition(control, "submit", "to_be_verified")
False
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
>>> control.setResult(34)
Set a value for interim IT5:
>>> control.setInterimValue("IT5", 5)
Cannot transition because IT3 and IT4 have None/empty values as default:
>>> try_transition(control, "submit", "to_be_verified")
False
Let’s set a value for those interims:
>>> control.setInterimValue("IT3", 3)
>>> try_transition(control, "submit", "to_be_verified")
False
>>> control.setInterimValue("IT4", 4)
Since interims IT1 and IT2 have default values set, the analysis will submit:
>>> try_transition(control, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(control)
'to_be_verified'
Submission of controls with dependencies
Controls with dependencies are not allowed. Controls can only be created
from analyses without dependents.
TODO Might we consider allowing the creation of controls with dependencies?
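The constraint above can be sketched as a simple filter: only services whose calculation does not depend on other analyses qualify for reference (control) analyses. The helper and the dependency mapping below are hypothetical illustrations:

```python
def reference_candidates(services, dependencies):
    """Hypothetical sketch: a reference (control) analysis is only
    created for services with no calculation dependencies."""
    return [s for s in services if not dependencies.get(s)]

# Fe is computed from Cu, Au from Fe (plus an interim), Cu stands alone:
deps = {"Cu": [], "Fe": ["Cu"], "Au": ["Fe"]}
print(reference_candidates(["Cu", "Fe", "Au"], deps))  # ['Cu']
```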
Reset the interim fields for analysis Au:
>>> Au.setInterimFields([])
Prepare a calculation that depends on Cu and assign it to Fe analysis:
>>> calc_fe = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Fe')
>>> calc_fe.setFormula("[Cu]*10")
>>> Fe.setCalculation(calc_fe)
Prepare a calculation that depends on Fe and assign it to Au analysis:
>>> calc_au = api.create(bikasetup.bika_calculations, 'Calculation', title='Calc for Au')
>>> interim_1 = {'keyword': 'IT1', 'title': 'Interim 1'}
>>> calc_au.setInterimFields([interim_1])
>>> calc_au.setFormula("([IT1]+[Fe])/2")
>>> Au.setCalculation(calc_au)
Create an Analysis Request:
>>> ar = new_ar([Cu, Fe, Au])
Create a Worksheet with control:
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> analyses = worksheet.getRegularAnalyses()
Only one control is created, for Cu, because it is the only analysis that does
not have dependents:
>>> controls = worksheet.getReferenceAnalyses()
>>> len(controls) == 1
True
>>> control = controls[0]
>>> control.getKeyword()
'Cu'
TODO This should not be like this, but the calculation is performed by
ajaxCalculateAnalysisEntry. The calculation logic must be moved to
`api.analysis.calculate`:
Cannot submit the routine analysis Fe because there is no result for the
routine analysis Cu, and the control of Cu cannot be used as a dependent:
>>> fe_analysis = filter(lambda an: an.getKeyword()=="Fe", analyses)[0]
>>> try_transition(fe_analysis, "submit", "to_be_verified")
False
Check permissions for Submit transition
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Set a result:
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(23)
Exactly these roles can submit:
>>> get_roles_for_permission("senaite.core: Edit Results", control)
['Analyst', 'LabManager', 'Manager']
And these roles can view results:
>>> get_roles_for_permission("senaite.core: View Results", control)
['Analyst', 'LabClerk', 'LabManager', 'Manager', 'RegulatoryInspector']
The current user can submit because they have the LabManager role:
>>> isTransitionAllowed(control, "submit")
True
But cannot for other roles:
>>> setRoles(portal, TEST_USER_ID, ['Authenticated', 'LabClerk', 'RegulatoryInspector', 'Sampler'])
>>> isTransitionAllowed(control, "submit")
False
Even if the user is Owner:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(control, "submit")
False
And Clients cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(control, "submit")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Reference Analysis (Control) verification guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowReferenceAnalysisControlVerify
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IVerified
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def to_new_worksheet_with_reference(ar, reference):
... worksheet = api.create(portal.worksheets, "Worksheet")
... service_uids = list()
... for analysis in ar.getAnalyses(full_objects=True):
... worksheet.addAnalysis(analysis)
... service_uids.append(analysis.getServiceUID())
... worksheet.addReferenceAnalyses(reference, service_uids)
... return worksheet
>>> def submit_regular_analyses(worksheet):
... for analysis in worksheet.getRegularAnalyses():
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def try_transition(object, transition_id, target_state_id):
... success = do_action_for(object, transition_id)[0]
... state = api.get_workflow_status_of(object)
... return success and state == target_state_id
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> control_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': api.get_uid(Cu), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '15', 'min': '14.5', 'max': '15.5'},]
>>> control_def.setReferenceResults(control_refs)
>>> control_sample = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=control_def,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Control verification basic constraints
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get the control and submit:
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(0)
>>> try_transition(control, "submit", "to_be_verified")
True
>>> api.get_workflow_status_of(control)
'to_be_verified'
I cannot verify the control because I am the same user who submitted:
>>> try_transition(control, "verify", "verified")
False
>>> api.get_workflow_status_of(control)
'to_be_verified'
And I cannot verify the Worksheet, because it can only be verified once all
analyses it contains are verified (and this is done automatically):
>>> try_transition(worksheet, "verify", "verified")
False
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
But if I enable self-verification:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Then, I can verify my own result:
>>> try_transition(control, "verify", "verified")
True
But the Worksheet remains in to_be_verified, because the regular analyses have
not been verified yet:
>>> api.get_workflow_status_of(worksheet)
'to_be_verified'
And we cannot re-verify a control that has been verified already:
>>> try_transition(control, "verify", "verified")
False
To ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
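The self-verification constraint exercised above can be summarised in one small guard. This is a hypothetical sketch, not the senaite.core guard implementation: the user who submitted a result may only verify it when self-verification is enabled in the setup:

```python
def can_verify(current_user, submitter, self_verification_enabled):
    """Hypothetical sketch of the self-verification rule: the
    submitter may only verify their own result when the setup
    allows self-verification."""
    if current_user == submitter:
        return self_verification_enabled
    return True

print(can_verify("analyst1", "analyst1", False))  # False
print(can_verify("analyst1", "analyst1", True))   # True
print(can_verify("verifier", "analyst1", False))  # True
```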
Check permissions for Verify transition
Enable self verification of results:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> bikasetup.getSelfVerificationEnabled()
True
Create a Worksheet and submit regular analyses:
>>> ar = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(ar, control_sample)
>>> submit_regular_analyses(worksheet)
Get the control and submit:
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(12)
>>> try_transition(control, "submit", "to_be_verified")
True
Exactly these roles can verify:
>>> get_roles_for_permission("senaite.core: Transition: Verify", control)
['LabManager', 'Manager', 'Verifier']
The current user can verify because they have the LabManager role:
>>> isTransitionAllowed(control, "verify")
True
Also if the user has the roles Manager or Verifier:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(control, "verify")
True
>>> setRoles(portal, TEST_USER_ID, ['Verifier',])
>>> isTransitionAllowed(control, "verify")
True
But users with other roles cannot:
>>> setRoles(portal, TEST_USER_ID, ['Analyst', 'Authenticated', 'LabClerk'])
>>> isTransitionAllowed(control, "verify")
False
Not even if the user has the Owner role:
>>> setRoles(portal, TEST_USER_ID, ['Owner'])
>>> isTransitionAllowed(control, "verify")
False
And users with the Client role cannot either:
>>> setRoles(portal, TEST_USER_ID, ['Client'])
>>> isTransitionAllowed(control, "verify")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
And to ensure consistency amongst tests, we disable self-verification:
>>> bikasetup.setSelfVerificationEnabled(False)
>>> bikasetup.getSelfVerificationEnabled()
False
IVerified interface is provided by verified controls
When verified, control analyses are marked with the IVerified interface:
>>> bikasetup.setSelfVerificationEnabled(True)
>>> sample = new_ar([Cu])
>>> worksheet = to_new_worksheet_with_reference(sample, control_sample)
>>> control = worksheet.getReferenceAnalyses()[0]
>>> control.setResult(12)
>>> success = do_action_for(control, "submit")
>>> IVerified.providedBy(control)
False
>>> success = do_action_for(control, "verify")
>>> IVerified.providedBy(control)
True
>>> bikasetup.setSelfVerificationEnabled(False)
Worksheet auto-transitions
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowWorksheetAutotransitions
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from DateTime import DateTime
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': DateTime(),
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... do_action_for(ar, "receive")
... return ar
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Auto-transition basic constraints
Create a Worksheet:
>>> ar = new_ar([Cu, Fe, Au])
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... ws.addAnalysis(analysis)
The status of the worksheet is “open”:
>>> api.get_workflow_status_of(ws)
'open'
If we submit all analyses from the Worksheet except 1:
>>> analyses = ws.getAnalyses()
>>> for analysis in analyses[1:]:
... analysis.setResult(12)
... success = do_action_for(analysis, "submit")
The Worksheet remains in “open” status:
>>> api.get_workflow_status_of(ws)
'open'
If now we remove the remaining analysis:
>>> ws.removeAnalysis(analyses[0])
The Worksheet is submitted automatically because all analyses it contains have
been submitted already:
>>> api.get_workflow_status_of(ws)
'to_be_verified'
If we add the analysis again:
>>> ws.addAnalysis(analyses[0])
The worksheet is rolled-back to open again:
>>> api.get_workflow_status_of(ws)
'open'
If we remove the analysis again and verify the rest:
>>> ws.removeAnalysis(analyses[0])
>>> api.get_workflow_status_of(ws)
'to_be_verified'
>>> setup.setSelfVerificationEnabled(True)
>>> for analysis in analyses[1:]:
... success = do_action_for(analysis, "verify")
>>> setup.setSelfVerificationEnabled(False)
The worksheet is verified automatically too:
>>> api.get_workflow_status_of(ws)
'verified'
And we cannot add analyses anymore:
>>> ws.addAnalysis(analyses[0])
>>> api.get_workflow_status_of(ws)
'verified'
>>> not analyses[0].getWorksheet()
True
>>> analyses[0] in ws.getAnalyses()
False
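The auto-transition rules exercised above boil down to this: a worksheet follows its analyses. A minimal sketch of that rule (an illustration only, not SENAITE's actual workflow code):

```python
# Illustrative sketch of the worksheet auto-transition rule shown above;
# this is NOT SENAITE's implementation, just the rule it demonstrates.
def worksheet_state(analysis_states):
    """Derive a worksheet state from the states of the analyses it holds."""
    if not analysis_states:
        return "open"
    if all(s == "verified" for s in analysis_states):
        return "verified"  # every analysis verified -> worksheet verified
    if all(s in ("to_be_verified", "verified") for s in analysis_states):
        return "to_be_verified"  # everything submitted -> to_be_verified
    return "open"  # at least one analysis is still pending

print(worksheet_state(["assigned", "to_be_verified"]))       # open
print(worksheet_state(["to_be_verified", "to_be_verified"])) # to_be_verified
print(worksheet_state(["verified", "verified"]))             # verified
```

Removing the pending analysis from the worksheet is what flips the result of the second check, which is why the worksheet above moved to to_be_verified automatically.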
Worksheet remove guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowWorksheetRemove
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from DateTime import DateTime
>>> from bika.lims import api
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': DateTime(),
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... do_action_for(ar, "receive")
... return ar
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> setup = portal.bika_setup
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(setup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(setup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(setup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(setup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(setup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(setup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(setup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Remove transition and guard basic constraints
Create a Worksheet:
>>> ar = new_ar([Cu, Fe, Au])
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... ws.addAnalysis(analysis)
The status of the worksheet is “open”:
>>> api.get_workflow_status_of(ws)
'open'
And it is not possible to remove the Worksheet unless it is empty:
>>> isTransitionAllowed(ws, "remove")
False
>>> for analysis in ws.getAnalyses():
... success = do_action_for(analysis, "unassign")
>>> isTransitionAllowed(ws, "remove")
True
If we do “remove”, the Worksheet object is deleted:
>>> container = ws.aq_parent
>>> len(container.objectValues("Worksheet"))
1
>>> success = do_action_for(ws, "remove")
>>> len(container.objectValues("Worksheet"))
0
Try now for all possible statuses:
>>> analyses = ar.getAnalyses(full_objects=True)
>>> cu = filter(lambda an: an.getKeyword() == "Cu", analyses)[0]
>>> fe = filter(lambda an: an.getKeyword() == "Fe", analyses)[0]
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> ws.addAnalysis(cu)
>>> cu.setResult(12)
>>> success = do_action_for(cu, "submit")
For to_be_verified status:
>>> api.get_workflow_status_of(ws)
'to_be_verified'
>>> isTransitionAllowed(ws, "remove")
False
For rejected status:
>>> success = do_action_for(ws, "reject")
>>> api.get_workflow_status_of(ws)
'rejected'
>>> isTransitionAllowed(ws, "remove")
False
For verified status:
>>> setup.setSelfVerificationEnabled(True)
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> ws.addAnalysis(fe)
>>> fe.setResult(12)
>>> success = do_action_for(fe, "submit")
>>> verified = do_action_for(fe, "verify")
>>> api.get_workflow_status_of(ws)
'verified'
>>> isTransitionAllowed(ws, "remove")
False
>>> setup.setSelfVerificationEnabled(False)
Check permissions for Remove transition
Create an empty Worksheet:
>>> ws = api.create(portal.worksheets, "Worksheet")
The status of the Worksheet is open:
>>> api.get_workflow_status_of(ws)
'open'
Exactly these roles can remove:
>>> get_roles_for_permission("senaite.core: Transition: Remove Worksheet", ws)
['LabManager', 'Manager']
The current user can remove because they have the LabManager role:
>>> isTransitionAllowed(ws, "remove")
True
Also if the user has the role Manager:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(ws, "remove")
True
But users with other roles cannot:
>>> other_roles = ['Analyst', 'Authenticated', 'LabClerk', 'Verifier']
>>> setRoles(portal, TEST_USER_ID, other_roles)
>>> isTransitionAllowed(ws, "remove")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Worksheet retract guard and event
Running this test from the buildout directory:
bin/test test_textual_doctests -t WorkflowWorksheetRetract
Test Setup
Needed Imports:
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.interfaces import IRetracted
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.workflow import doActionFor as do_action_for
>>> from bika.lims.workflow import isTransitionAllowed
>>> from DateTime import DateTime
>>> from plone.app.testing import setRoles
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
>>> def timestamp(format="%Y-%m-%d"):
... return DateTime().strftime(format)
>>> def new_ar(services):
... values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
... service_uids = map(api.get_uid, services)
... ar = create_analysisrequest(client, request, values, service_uids)
... transitioned = do_action_for(ar, "receive")
... return ar
>>> def submit_analyses(ar):
... for analysis in ar.getAnalyses(full_objects=True):
... analysis.setResult(13)
... do_action_for(analysis, "submit")
>>> def get_roles_for_permission(permission, context):
... allowed = set(rolesForPermissionOn(permission, context))
... return sorted(allowed)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> blank_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': api.get_uid(Cu), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '0', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '0', 'min': '0', 'max': '0'},]
>>> blank_def.setReferenceResults(blank_refs)
>>> control_def = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': api.get_uid(Cu), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Fe), 'result': '10', 'min': '0', 'max': '0'},
... {'uid': api.get_uid(Au), 'result': '15', 'min': '14.5', 'max': '15.5'},]
>>> control_def.setReferenceResults(control_refs)
>>> blank = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blank_def,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
>>> control = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=control_def,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Retract transition and guard basic constraints
Create a Worksheet:
>>> ar = new_ar([Cu, Fe, Au])
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... ws.addAnalysis(analysis)
The status of the worksheet is “open”:
>>> api.get_workflow_status_of(ws)
'open'
And it is not possible to retract while the status is “open”:
>>> isTransitionAllowed(ws, "retract")
False
But it is possible to retract once the status is “to_be_verified”:
>>> submit_analyses(ar)
>>> list(set(map(api.get_workflow_status_of, ws.getAnalyses())))
['to_be_verified']
>>> api.get_workflow_status_of(ws)
'to_be_verified'
>>> isTransitionAllowed(ws, "retract")
True
The retraction of the worksheet causes all its analyses to be retracted:
>>> do_action_for(ws, "retract")
(True, '')
>>> analyses = ws.getAnalyses()
>>> len(analyses)
6
>>> sorted(map(api.get_workflow_status_of, analyses))
['assigned', 'assigned', 'assigned', 'retracted', 'retracted', 'retracted']
>>> sorted(map(IRetracted.providedBy, analyses))
[False, False, False, True, True, True]
And the Worksheet transitions to “open”:
>>> api.get_workflow_status_of(ws)
'open'
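The counts above follow from a simple rule: retracting an analysis keeps the original on the worksheet (flagged with IRetracted, state retracted) and creates a retest in its place (state assigned). A rough sketch of that arithmetic (an illustration only, not the actual workflow code):

```python
# Sketch of the retraction arithmetic demonstrated above (illustration
# only): each retracted analysis remains on the worksheet and a retest
# is created in its place, so the number of analyses doubles.
def retract_all(states):
    out = []
    for _ in states:
        out.append("retracted")  # the original, now providing IRetracted
        out.append("assigned")   # the retest created in its place
    return sorted(out)

print(retract_all(["to_be_verified"] * 3))
# ['assigned', 'assigned', 'assigned', 'retracted', 'retracted', 'retracted']
```

This is why 3 submitted analyses become 6 after retraction, and why the worksheet reopens: the retests are back in the assigned state.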
With duplicates and reference analyses, the system behaves the same way:
>>> dups = ws.addDuplicateAnalyses(1)
>>> blanks = ws.addReferenceAnalyses(blank, [Cu.UID(), Fe.UID(), Au.UID()])
>>> controls = ws.addReferenceAnalyses(control, [Cu.UID(), Fe.UID(), Au.UID()])
>>> len(ws.getAnalyses())
15
>>> for analysis in ws.getAnalyses():
... analysis.setResult(10)
... success = do_action_for(analysis, "submit")
>>> analyses = ws.getAnalyses()
>>> sorted(set(map(api.get_workflow_status_of, analyses)))
['retracted', 'to_be_verified']
Since all non-retracted analyses have been submitted, the worksheet status is
to_be_verified:
>>> api.get_workflow_status_of(ws)
'to_be_verified'
The Worksheet can be retracted:
>>> isTransitionAllowed(ws, "retract")
True
>>> do_action_for(ws, "retract")
(True, '')
>>> analyses = ws.getAnalyses()
>>> len(analyses)
27
>>> statuses = map(api.get_workflow_status_of, analyses)
>>> len(filter(lambda st: st == "assigned", statuses))
12
>>> len(filter(lambda st: st == "retracted", statuses))
15
And the worksheet transitions to “open”:
>>> api.get_workflow_status_of(ws)
'open'
Check permissions for Retract transition
Create a Worksheet and submit results:
>>> ar = new_ar([Cu, Fe, Au])
>>> ws = api.create(portal.worksheets, "Worksheet")
>>> for analysis in ar.getAnalyses(full_objects=True):
... ws.addAnalysis(analysis)
>>> submit_analyses(ar)
The status of the Worksheet and its analyses is to_be_verified:
>>> api.get_workflow_status_of(ws)
'to_be_verified'
>>> analyses = ws.getAnalyses()
>>> list(set(map(api.get_workflow_status_of, analyses)))
['to_be_verified']
Exactly these roles can retract:
>>> get_roles_for_permission("senaite.core: Transition: Retract", ws)
['LabManager', 'Manager']
The current user can retract because they have the LabManager role:
>>> isTransitionAllowed(ws, "retract")
True
Also if the user has the role Manager:
>>> setRoles(portal, TEST_USER_ID, ['Manager',])
>>> isTransitionAllowed(ws, "retract")
True
But users with other roles cannot:
>>> other_roles = ['Analyst', 'Authenticated', 'LabClerk', 'Verifier']
>>> setRoles(portal, TEST_USER_ID, other_roles)
>>> isTransitionAllowed(ws, "retract")
False
Reset the roles for current user:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
Worksheet - Apply Worksheet Template
Worksheets are the main artifact for planning tests in the laboratory. They are
also used to add reference samples (controls and blanks) and duplicates, and to
aggregate related tests from different Analysis Requests so they can be
processed in a single run.
Although the lab manager can create worksheets manually each time one is
required, a better approach is to create them by using Worksheet Templates. In a
Worksheet Template, the lab manager defines the layout, the number of slots and
the type of analyses (reference or routine) to be placed in each slot, as well
as the Method and Instrument to be assigned. Worksheet Templates are thus used
for the semi-automated creation of Worksheets.
This doctest validates the consistency between the Worksheet and the Worksheet
Template used for its creation. It also tests the correctness of the worksheet
when a Worksheet Template is applied to a manually created Worksheet.
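Each layout entry in a Worksheet Template is a plain dictionary. A hypothetical minimal example (the field names match the layouts used later in this doctest; the values here are made up for illustration):

```python
# Hypothetical minimal Worksheet Template layout, as used in this doctest.
# Slot types: 'a' routine, 'd' duplicate, 'c' control, 'b' blank.
layout = [
    {"pos": "1", "type": "a", "blank_ref": "", "control_ref": "", "dup": ""},
    {"pos": "2", "type": "d", "blank_ref": "", "control_ref": "", "dup": "1"},
]

# Routine slots are the positions declared with type 'a':
routine_slots = [int(s["pos"]) for s in layout if s["type"] == "a"]
print(routine_slots)  # [1]
```

A duplicate slot points back to its source slot through `dup`, while control and blank slots carry the UID of a reference definition in `control_ref` or `blank_ref`.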
Test Setup
Running this test from the buildout directory:
bin/test -t WorksheetApplyTemplate
Needed Imports:
>>> import re
>>> from AccessControl.PermissionRole import rolesForPermissionOn
>>> from bika.lims import api
>>> from bika.lims.content.analysisrequest import AnalysisRequest
>>> from bika.lims.utils.analysisrequest import create_analysisrequest
>>> from bika.lims.utils import tmpID
>>> from bika.lims.workflow import doActionFor
>>> from bika.lims.workflow import getCurrentState
>>> from bika.lims.workflow import getAllowedTransitions
>>> from DateTime import DateTime
>>> from plone.app.testing import TEST_USER_ID
>>> from plone.app.testing import TEST_USER_PASSWORD
>>> from plone.app.testing import setRoles
Functional Helpers:
>>> def start_server():
... from Testing.ZopeTestCase.utils import startZServer
... ip, port = startZServer()
... return "http://{}:{}/{}".format(ip, port, portal.id)
Variables:
>>> portal = self.portal
>>> request = self.request
>>> bikasetup = portal.bika_setup
>>> date_now = DateTime().strftime("%Y-%m-%d")
>>> date_future = (DateTime() + 5).strftime("%Y-%m-%d")
We need to create some basic objects for the test:
>>> setRoles(portal, TEST_USER_ID, ['LabManager',])
>>> client = api.create(portal.clients, "Client", Name="Happy Hills", ClientID="HH", MemberDiscountApplies=True)
>>> contact = api.create(client, "Contact", Firstname="Rita", Lastname="Mohale")
>>> sampletype = api.create(bikasetup.bika_sampletypes, "SampleType", title="Water", Prefix="W")
>>> labcontact = api.create(bikasetup.bika_labcontacts, "LabContact", Firstname="Lab", Lastname="Manager")
>>> department = api.create(bikasetup.bika_departments, "Department", title="Chemistry", Manager=labcontact)
>>> category = api.create(bikasetup.bika_analysiscategories, "AnalysisCategory", title="Metals", Department=department)
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", Name="Naralabs")
>>> Cu = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Copper", Keyword="Cu", Price="15", Category=category.UID(), Accredited=True)
>>> Fe = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Iron", Keyword="Fe", Price="10", Category=category.UID())
>>> Au = api.create(bikasetup.bika_analysisservices, "AnalysisService", title="Gold", Keyword="Au", Price="20", Category=category.UID())
Create some Analysis Requests, so we can use them as sources for Worksheet creation:
>>> values = {
... 'Client': client.UID(),
... 'Contact': contact.UID(),
... 'DateSampled': date_now,
... 'SampleType': sampletype.UID()}
>>> service_uids = [Cu.UID(), Fe.UID(), Au.UID()]
>>> ar0 = create_analysisrequest(client, request, values, service_uids)
>>> ar1 = create_analysisrequest(client, request, values, service_uids)
>>> ar2 = create_analysisrequest(client, request, values, service_uids)
>>> ar3 = create_analysisrequest(client, request, values, service_uids)
>>> ar4 = create_analysisrequest(client, request, values, service_uids)
>>> ar5 = create_analysisrequest(client, request, values, service_uids)
>>> ar6 = create_analysisrequest(client, request, values, service_uids)
>>> ar7 = create_analysisrequest(client, request, values, service_uids)
>>> ar8 = create_analysisrequest(client, request, values, service_uids)
>>> ar9 = create_analysisrequest(client, request, values, service_uids)
Worksheet Template creation
Create a Worksheet Template for Cu and Fe analyses, with the following 7-slot
layout:
Routine analyses in slots 1, 2, 4
Duplicate analysis from slot 1 in slot 3
Duplicate analysis from slot 4 in slot 5
Control analysis in slot 6
Blank analysis in slot 7
>>> service_uids = [Cu.UID(), Fe.UID()]
>>> layout = [
... {'pos': '1', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... {'pos': '2', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... {'pos': '3', 'type': 'd',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': '1'},
... {'pos': '4', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... {'pos': '5', 'type': 'd',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': '4'},
... {'pos': '6', 'type': 'c',
... 'blank_ref': '',
... 'control_ref': 'jajsjas',
... 'dup': ''},
... {'pos': '7', 'type': 'b',
... 'blank_ref': 'asasasa',
... 'control_ref': '',
... 'dup': ''},
... ]
>>> template = api.create(bikasetup.bika_worksheettemplates, "WorksheetTemplate", title="WS Template Test", Layout=layout, Service=service_uids)
Apply Worksheet Template to a Worksheet
Create a new Worksheet by using this worksheet template:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.applyWorksheetTemplate(template)
Since we haven’t received any analysis requests, this worksheet remains empty:
>>> worksheet.getAnalyses()
[]
>>> worksheet.getLayout()
[]
Receive the Analysis Requests and apply the Worksheet Template again:
>>> performed = doActionFor(ar0, 'receive')
>>> performed = doActionFor(ar1, 'receive')
>>> performed = doActionFor(ar2, 'receive')
>>> performed = doActionFor(ar3, 'receive')
>>> performed = doActionFor(ar4, 'receive')
>>> performed = doActionFor(ar5, 'receive')
>>> performed = doActionFor(ar6, 'receive')
>>> performed = doActionFor(ar7, 'receive')
>>> performed = doActionFor(ar8, 'receive')
>>> performed = doActionFor(ar9, 'receive')
>>> worksheet.applyWorksheetTemplate(template)
Slots 1, 2 and 4 are filled with routine analyses:
>>> worksheet.get_slot_positions(type='a')
[1, 2, 4]
Each slot occupied by routine analyses is assigned to an Analysis Request, so
each time we add an analysis, it will be added into its corresponding slot:
>>> container = worksheet.get_container_at(1)
>>> container.UID() == ar0.UID()
True
>>> slot1_analyses = worksheet.get_analyses_at(1)
>>> an_ar = list(set([an.getRequestUID() for an in slot1_analyses]))
>>> an_ar[0] == ar0.UID()
True
>>> [an.getKeyword() for an in slot1_analyses]
['Cu', 'Fe']
Slots 3 and 5 are filled with duplicate analyses:
>>> worksheet.get_slot_positions(type='d')
[3, 5]
>>> dup1 = worksheet.get_analyses_at(3)
>>> len(dup1) == 2
True
>>> list(set([dup.portal_type for dup in dup1]))
['DuplicateAnalysis']
The first duplicate analysis located at slot 3 is a duplicate of the first
analysis from slot 1:
>>> dup_an = dup1[0].getAnalysis()
>>> slot1_analyses[0].UID() == dup_an.UID()
True
But since we haven’t created any reference analyses (neither blanks nor
controls), the slots reserved for them are not occupied:
>>> worksheet.get_slot_positions(type='c')
[]
>>> worksheet.get_slot_positions(type='b')
[]
Remove analyses and Apply Worksheet Template again
Remove analyses located at position 2:
>>> to_del = worksheet.get_analyses_at(2)
>>> worksheet.removeAnalysis(to_del[0])
>>> worksheet.removeAnalysis(to_del[1])
Only slots 1, 4 are filled with routine analyses now:
>>> worksheet.get_slot_positions(type='a')
[1, 4]
Modify the Worksheet Template to include the Au analysis and apply the template
to the same Worksheet again:
>>> service_uids = [Cu.UID(), Fe.UID(), Au.UID()]
>>> template.setService(service_uids)
>>> worksheet.applyWorksheetTemplate(template)
Now, slot 2 is filled again:
>>> worksheet.get_slot_positions(type='a')
[1, 2, 4]
And each slot contains the additional analysis Au:
>>> slot1_analyses = worksheet.get_analyses_at(1)
>>> len(slot1_analyses) == 3
True
>>> an_ar = list(set([an.getRequestUID() for an in slot1_analyses]))
>>> an_ar[0] == ar0.UID()
True
>>> [an.getKeyword() for an in slot1_analyses]
['Cu', 'Fe', 'Au']
As well as in duplicate analyses:
>>> dup1 = worksheet.get_analyses_at(3)
>>> len(dup1) == 3
True
>>> slot3_analyses = worksheet.get_analyses_at(3)
>>> [an.getKeyword() for an in slot3_analyses]
['Cu', 'Fe', 'Au']
Remove a duplicate and add it manually
Remove all duplicate analyses from slot 5:
>>> dup5 = worksheet.get_analyses_at(5)
>>> len(dup5) == 3
True
>>> worksheet.removeAnalysis(dup5[0])
>>> worksheet.removeAnalysis(dup5[1])
>>> worksheet.removeAnalysis(dup5[2])
>>> dup5 = worksheet.get_analyses_at(5)
>>> len(dup5) == 0
True
Add duplicates using the same source routine analysis, located at slot 4, but
manually instead of applying the Worksheet Template:
>>> dups = worksheet.addDuplicateAnalyses(4)
Three duplicates have been added to the worksheet:
>>> [dup.getKeyword() for dup in dups]
['Cu', 'Fe', 'Au']
And these duplicates have been placed in slot number 5, because that is the slot
where they fit best according to the layout defined in the Worksheet Template
associated with this worksheet:
>>> dup5 = worksheet.get_analyses_at(5)
>>> [dup.getKeyword() for dup in dup5]
['Cu', 'Fe', 'Au']
>>> dups_uids = [dup.UID() for dup in dups]
>>> dup5_uids = [dup.UID() for dup in dup5]
>>> [dup for dup in dup5_uids if dup not in dups_uids]
[]
But if we remove only one duplicate analysis from slot number 5:
>>> worksheet.removeAnalysis(dup5[0])
>>> dup5 = worksheet.get_analyses_at(5)
>>> [dup.getKeyword() for dup in dup5]
['Fe', 'Au']
And when we manually add duplicates for the analysis in position 4, a new slot
is added at the end of the worksheet (slot number 8), because slot number 5 is
already occupied and slots 6 and 7, although empty, are reserved for blank and
control analyses:
>>> worksheet.get_analyses_at(8)
[]
>>> dups = worksheet.addDuplicateAnalyses(4)
>>> [dup.getKeyword() for dup in dups]
['Cu', 'Fe', 'Au']
>>> dup8 = worksheet.get_analyses_at(8)
>>> [dup.getKeyword() for dup in dup8]
['Cu', 'Fe', 'Au']
>>> dups_uids = [dup.UID() for dup in dups]
>>> dup8_uids = [dup.UID() for dup in dup8]
>>> [dup for dup in dup8_uids if dup not in dups_uids]
[]
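The slot-selection behaviour shown above can be sketched as follows (an assumed reading of the rule, not SENAITE's implementation): reuse the slot the template reserves for this kind of analysis if it is free, otherwise open a new slot after the highest occupied or reserved one.

```python
# Sketch (assumed rule, not SENAITE's code) of duplicate/reference slot
# selection: take the template's reserved slot when it is empty,
# otherwise open a new slot at the end of the worksheet.
def pick_slot(reserved_slots, occupied_slots):
    for slot in reserved_slots:
        if slot not in occupied_slots:
            return slot
    return max(occupied_slots | set(reserved_slots)) + 1

print(pick_slot([5], {1, 2, 3, 4}))           # 5: reserved slot is free
print(pick_slot([5], {1, 2, 3, 4, 5, 6, 7}))  # 8: appended at the end
```

The same rule explains the reference-analysis placements in the next section: slot 6 is reused while empty, and a fresh trailing slot is opened once it is taken.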
Control and blanks with Worksheet Template
First, create a Reference Definition for blank:
>>> blankdef = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Blank definition", Blank=True)
>>> blank_refs = [{'uid': Cu.UID(), 'result': '0', 'min': '0', 'max': '0', 'error': '0'},
... {'uid': Fe.UID(), 'result': '0', 'min': '0', 'max': '0', 'error': '0'},]
>>> blankdef.setReferenceResults(blank_refs)
And for control:
>>> controldef = api.create(bikasetup.bika_referencedefinitions, "ReferenceDefinition", title="Control definition")
>>> control_refs = [{'uid': Cu.UID(), 'result': '10', 'min': '0.9', 'max': '10.1', 'error': '0.1'},
... {'uid': Fe.UID(), 'result': '10', 'min': '0.9', 'max': '10.1', 'error': '0.1'},]
>>> controldef.setReferenceResults(control_refs)
Then, we create the associated Reference Samples:
>>> blank = api.create(supplier, "ReferenceSample", title="Blank",
... ReferenceDefinition=blankdef,
... Blank=True, ExpiryDate=date_future,
... ReferenceResults=blank_refs)
>>> control = api.create(supplier, "ReferenceSample", title="Control",
... ReferenceDefinition=controldef,
... Blank=False, ExpiryDate=date_future,
... ReferenceResults=control_refs)
Apply the blank and control to the Worksheet Template layout:
>>> layout = template.getLayout()
>>> layout[5] = {'pos': '6', 'type': 'c',
... 'blank_ref': '',
... 'control_ref': controldef.UID(),
... 'dup': ''}
>>> layout[6] = {'pos': '7', 'type': 'b',
... 'blank_ref': blankdef.UID(),
... 'control_ref': '',
... 'dup': ''}
>>> template.setLayout(layout)
Apply the worksheet template again:
>>> worksheet.applyWorksheetTemplate(template)
Blank analyses are placed at slot number 7; note that the reference definition
only covers the Cu and Fe analyses:
>>> ans = worksheet.get_analyses_at(7)
>>> [an.getKeyword() for an in ans]
['Cu', 'Fe']
>>> list(set([an.getReferenceType() for an in ans]))
['b']
Control analyses at slot number 6:
>>> ans = worksheet.get_analyses_at(6)
>>> [an.getKeyword() for an in ans]
['Cu', 'Fe']
>>> list(set([an.getReferenceType() for an in ans]))
['c']
Remove Reference Analyses and add them manually
Remove all controls from slot 6:
>>> ans6 = worksheet.get_analyses_at(6)
>>> len(ans6)
2
>>> worksheet.removeAnalysis(ans6[0])
>>> worksheet.removeAnalysis(ans6[1])
>>> worksheet.get_analyses_at(6)
[]
Add the reference analyses again, but manually this time:
>>> ref_ans = worksheet.addReferenceAnalyses(control, [Fe.UID(), Cu.UID()])
>>> [ref.getKeyword() for ref in ref_ans]
['Cu', 'Fe']
These reference analyses have been placed in slot number 6, because that is the
slot where they fit best according to the layout defined in the Worksheet
Template associated with this worksheet:
>>> ref6 = worksheet.get_analyses_at(6)
>>> [ref.getKeyword() for ref in ref6]
['Cu', 'Fe']
>>> refs_uids = [ref.UID() for ref in ref_ans]
>>> ref6_uids = [ref.UID() for ref in ref6]
>>> [ref for ref in ref6_uids if ref not in refs_uids]
[]
But if we remove only one reference analysis from slot number 6:
>>> worksheet.removeAnalysis(ref6[0])
>>> ref6 = worksheet.get_analyses_at(6)
>>> [ref.getKeyword() for ref in ref6]
['Fe']
And when we manually add reference analyses, a new slot is added at the end of
the worksheet (slot number 9), because slot number 6 is already occupied, as
well as the rest of the slots:
>>> worksheet.get_analyses_at(9)
[]
>>> ref_ans = worksheet.addReferenceAnalyses(control, [Fe.UID(), Cu.UID()])
>>> [ref.getKeyword() for ref in ref_ans]
['Cu', 'Fe']
>>> ref9 = worksheet.get_analyses_at(9)
>>> [ref.getKeyword() for ref in ref9]
['Cu', 'Fe']
>>> refs_uids = [ref.UID() for ref in ref_ans]
>>> ref9_uids = [ref.UID() for ref in ref9]
>>> [ref for ref in ref9_uids if ref not in refs_uids]
[]
Reject any remaining analyses awaiting assignment:
>>> query = {"portal_type": "Analysis", "review_state": "unassigned"}
>>> objs = map(api.get_object, api.search(query, "senaite_catalog_analysis"))
>>> success = map(lambda obj: doActionFor(obj, "reject"), objs)
WorksheetTemplate assignment to a non-empty Worksheet
Worksheet Template can also be used when the worksheet is not empty.
The template has slots available for routine analyses in positions 1, 2 and 4:
>>> layout = template.getLayout()
>>> slots = filter(lambda p: p["type"] == "a", layout)
>>> sorted(map(lambda p: int(p.get("pos")), slots))
[1, 2, 4]
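The filtering above operates on plain layout dictionaries. As a standalone sketch (the layout rows below are illustrative, mirroring the routine slots of the template used in this test):

```python
# Minimal sketch of filtering a worksheet template layout by slot type.
# The rows are illustrative: routine ('a') slots at positions 1, 2 and 4,
# plus a control ('c') and a blank ('b') slot.
layout = [
    {"pos": "1", "type": "a"},
    {"pos": "2", "type": "a"},
    {"pos": "4", "type": "a"},
    {"pos": "6", "type": "c"},  # control slot
    {"pos": "7", "type": "b"},  # blank slot
]
slots = [row for row in layout if row["type"] == "a"]
positions = sorted(int(row["pos"]) for row in slots)
print(positions)  # [1, 2, 4]
```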
Create 3 samples with 'Cu' analyses:
>>> service_uids = [Cu]
>>> samples = map(lambda i: create_analysisrequest(client, request, values, service_uids), range(3))
>>> success = map(lambda s: doActionFor(s, "receive"), samples)
Create a worksheet and apply the template:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.applyWorksheetTemplate(template)
The Sample at the first slot contains only one analysis (Cu):
>>> first = worksheet.get_container_at(1)
>>> first_analyses = worksheet.get_analyses_at(1)
>>> len(first_analyses)
1
>>> first_analyses[0].getKeyword()
'Cu'
>>> first_analyses[0].getRequest() == first
True
Add "Fe" analysis to the sample at the first slot and re-assign the worksheet:
>>> cu = first.getAnalyses(full_objects=True)[0]
>>> first.setAnalyses([cu, Fe])
>>> worksheet.applyWorksheetTemplate(template)
The first slot, booked for the first Sample, now contains 'Fe' as well:
>>> first_analyses = worksheet.get_analyses_at(1)
>>> len(first_analyses)
2
>>> map(lambda a: a.getKeyword(), first_analyses)
['Cu', 'Fe']
>>> map(lambda a: a.getRequest() == first, first_analyses)
[True, True]
Add "Fe" analysis to the third Sample (slot #4) and re-assign the worksheet:
>>> third = worksheet.get_container_at(4)
>>> cu = third.getAnalyses(full_objects=True)[0]
>>> third.setAnalyses([cu, Fe])
>>> worksheet.applyWorksheetTemplate(template)
The fourth slot now contains 'Fe' too:
>>> third_analyses = worksheet.get_analyses_at(4)
>>> len(third_analyses)
2
>>> map(lambda a: a.getKeyword(), third_analyses)
['Cu', 'Fe']
>>> map(lambda a: a.getRequest() == third, third_analyses)
[True, True]
Now create 3 more samples:
>>> service_uids = [Cu]
>>> samples = map(lambda i: create_analysisrequest(client, request, values, service_uids), range(3))
>>> success = map(lambda s: doActionFor(s, "receive"), samples)
And reassign the template to the worksheet:
>>> worksheet.applyWorksheetTemplate(template)
None of these new samples have been added:
>>> new_samp_uids = map(api.get_uid, samples)
>>> container_uids = map(lambda l: l["container_uid"], worksheet.getLayout())
>>> [u for u in new_samp_uids if u in container_uids]
[]
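The check above is a plain membership test between two UID lists. As a standalone sketch (the UID strings are hypothetical placeholders):

```python
# Standalone sketch of the membership check above: none of the new sample
# UIDs may appear among the container UIDs of the worksheet layout.
# The UID strings are hypothetical placeholders.
new_samp_uids = ["uid-new-1", "uid-new-2", "uid-new-3"]
container_uids = ["uid-old-1", "uid-old-2", "uid-old-3"]
leaked = [u for u in new_samp_uids if u in container_uids]
print(leaked)  # []
```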
Add "Fe" analysis to the second Sample and re-assign the worksheet:
>>> second = worksheet.get_container_at(2)
>>> cu = second.getAnalyses(full_objects=True)[0]
>>> second.setAnalyses([cu, Fe])
>>> worksheet.applyWorksheetTemplate(template)
The second slot now contains 'Fe' too:
>>> second_analyses = worksheet.get_analyses_at(2)
>>> len(second_analyses)
2
>>> map(lambda a: a.getKeyword(), second_analyses)
['Cu', 'Fe']
>>> map(lambda a: a.getRequest() == second, second_analyses)
[True, True]
Meanwhile, none of the analyses from the new samples have been added:
>>> container_uids = map(lambda l: l["container_uid"], worksheet.getLayout())
>>> [u for u in new_samp_uids if u in container_uids]
[]
Reject any remaining analyses awaiting assignment:
>>> query = {"portal_type": "Analysis", "review_state": "unassigned"}
>>> objs = map(api.get_object, api.search(query, "senaite_catalog_analysis"))
>>> success = map(lambda obj: doActionFor(obj, "reject"), objs)
WorksheetTemplate assignment keeps Sample natural order
Analyses are grabbed using their priority sort key, but the samples are placed
in the slots following their natural order.
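The distinction can be sketched with plain sample IDs (illustrative values; the natural order here is simply the sort order of the sample IDs):

```python
# Sketch of the behavior described above: analyses may be picked in priority
# order, yet their samples occupy the slots in natural order.
# The sample IDs below are illustrative.
picked = ["W-0003", "W-0001", "W-0002"]  # order in which analyses were grabbed
slot_assignment = sorted(picked)          # samples placed in natural order
print(slot_assignment)  # ['W-0001', 'W-0002', 'W-0003']
```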
Create and receive 3 samples:
>>> service_uids = [Cu]
>>> samples = map(lambda i: create_analysisrequest(client, request, values, service_uids), range(3))
>>> success = map(lambda s: doActionFor(s, "receive"), samples)
Create a worksheet and apply the template:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.applyWorksheetTemplate(template)
Slots follow the natural order of the samples:
>>> map(lambda s: worksheet.get_slot_position(s), samples)
[1, 2, 4]
Assignment of a WorksheetTemplate with no services
Create a Worksheet Template without services assigned:
>>> service_uids = []
>>> layout = [
... {'pos': '1', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... {'pos': '2', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... ]
>>> empty_template = api.create(bikasetup.bika_worksheettemplates, "WorksheetTemplate", title="WS Template Empty Test", Layout=layout, Service=service_uids)
Create and receive 2 samples:
>>> service_uids = [Cu]
>>> samples = map(lambda i: create_analysisrequest(client, request, values, service_uids), range(2))
>>> success = map(lambda s: doActionFor(s, "receive"), samples)
Create a Worksheet and assign the template:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.applyWorksheetTemplate(empty_template)
Worksheet remains empty:
>>> worksheet.getAnalyses()
[]
Assignment of Worksheet Template with Instrument
When a Worksheet Template has an instrument assigned, only analyses that can be
performed with that same instrument are added to the worksheet.
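The rule can be illustrated with a conceptual sketch (this is not the actual SENAITE implementation; the analyses are represented as plain dictionaries and the instrument UID is hypothetical):

```python
# Conceptual sketch (not the SENAITE implementation) of the rule described
# above: when a template carries an instrument, only analyses whose service
# allows that instrument qualify for the worksheet.
def qualifying(analyses, template_instrument):
    """Filter analyses by the instrument the template restricts to."""
    if template_instrument is None:
        return list(analyses)
    return [a for a in analyses if template_instrument in a["instruments"]]

# Hypothetical analyses: Cu allows no instruments yet, Fe allows "instr-1".
analyses = [
    {"keyword": "Cu", "instruments": []},
    {"keyword": "Fe", "instruments": ["instr-1"]},
]
print([a["keyword"] for a in qualifying(analyses, "instr-1")])  # ['Fe']
```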
Create a new Instrument:
>>> instr_type = api.create(bikasetup.bika_instrumenttypes, "InstrumentType", title="Temp instrument type")
>>> manufacturer = api.create(bikasetup.bika_manufacturers, "Manufacturer", title="Temp manufacturer")
>>> supplier = api.create(bikasetup.bika_suppliers, "Supplier", title="Temp supplier")
>>> instrument = api.create(bikasetup.bika_instruments,
... "Instrument",
... title="Temp Instrument",
... Manufacturer=manufacturer,
... Supplier=supplier,
... InstrumentType=instr_type)
Create a Worksheet Template and assign the instrument:
>>> service_uids = [Cu]
>>> layout = [
... {'pos': '1', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... {'pos': '2', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... ]
>>> instr_template = api.create(bikasetup.bika_worksheettemplates,
... "WorksheetTemplate",
... title="WS Template with instrument",
... Layout=layout,
... Instrument=instrument,
... Service=service_uids)
Reject any previous analyses awaiting assignment:
>>> query = {"portal_type": "Analysis", "review_state": "unassigned"}
>>> objs = map(api.get_object, api.search(query, "senaite_catalog_analysis"))
>>> success = map(lambda obj: doActionFor(obj, "reject"), objs)
Create and receive 2 samples:
>>> service_uids = [Cu]
>>> samples = map(lambda i: create_analysisrequest(client, request, values, service_uids), range(2))
>>> success = map(lambda s: doActionFor(s, "receive"), samples)
Create a Worksheet and assign the template:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.applyWorksheetTemplate(instr_template)
The worksheet remains empty because the instrument is not allowed for the Cu service:
>>> worksheet.getAnalyses()
[]
Assign the Instrument to the Cu service:
>>> Cu.setInstruments([instrument,])
Re-assign the worksheet template:
>>> worksheet.applyWorksheetTemplate(instr_template)
The worksheet now contains the two Cu analyses:
>>> ws_analyses = worksheet.getAnalyses()
>>> all(map(lambda a: a.getRequest() in samples, ws_analyses))
True
Unassign instrument from Cu service:
>>> Cu.setInstruments([])
Reject any remaining analyses awaiting assignment:
>>> query = {"portal_type": "Analysis", "review_state": "unassigned"}
>>> objs = map(api.get_object, api.search(query, "senaite_catalog_analysis"))
>>> success = map(lambda obj: doActionFor(obj, "reject"), objs)
Assignment of Worksheet Template with Method
When a Worksheet Template has a method assigned, only analyses that can be
performed with that same method are added to the worksheet.
Create a new Method:
>>> method = api.create(portal.methods, "Method", title="Temp method")
Create a Worksheet Template and assign the method:
>>> service_uids = [Cu]
>>> layout = [
... {'pos': '1', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... {'pos': '2', 'type': 'a',
... 'blank_ref': '',
... 'control_ref': '',
... 'dup': ''},
... ]
>>> method_template = api.create(bikasetup.bika_worksheettemplates,
... "WorksheetTemplate",
...                                 title="WS Template with method",
... Layout=layout,
... RestrictToMethod=method,
... Service=service_uids)
Create and receive 2 samples:
>>> service_uids = [Cu]
>>> samples = map(lambda i: create_analysisrequest(client, request, values, service_uids), range(2))
>>> success = map(lambda s: doActionFor(s, "receive"), samples)
Create a Worksheet and assign the template:
>>> worksheet = api.create(portal.worksheets, "Worksheet")
>>> worksheet.applyWorksheetTemplate(method_template)
The worksheet remains empty because the method is not allowed for the Cu service:
>>> worksheet.getAnalyses()
[]
Assign the Method to the Cu service:
>>> Cu.setMethods([method, ])
Re-assign the worksheet template:
>>> worksheet.applyWorksheetTemplate(method_template)
The worksheet now contains the two analyses:
>>> ws_analyses = worksheet.getAnalyses()
>>> len(ws_analyses)
2
>>> all(map(lambda a: a.getRequest() in samples, ws_analyses))
True
Unassign the method from the Cu service:
>>> Cu.setMethods([])
Reject any remaining analyses awaiting assignment:
>>> query = {"portal_type": "Analysis", "review_state": "unassigned"}
>>> objs = map(api.get_object, api.search(query, "senaite_catalog_analysis"))
>>> success = map(lambda obj: doActionFor(obj, "reject"), objs)
Worksheet - Worksheet Layouts Utility
Test Setup
Running this test from the buildout directory:
bin/test -t WorksheetLayoutUtility
Required Imports:
>>> from bika.lims.browser.worksheet.tools import getWorksheetLayouts
>>> from bika.lims.config import WORKSHEET_LAYOUT_OPTIONS
>>> from Products.Archetypes.public import DisplayList
Check the layouts against the options registered in the configuration:
>>> layouts = set(getWorksheetLayouts().keys())
>>> config_layouts = set(DisplayList(WORKSHEET_LAYOUT_OPTIONS).keys())
>>> intersection = layouts.intersection(config_layouts)
>>> len(intersection)
2
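The final assertion boils down to a set intersection. As a pure-Python sketch (the layout keys below are illustrative assumptions, not the actual keys from `WORKSHEET_LAYOUT_OPTIONS`):

```python
# Pure-Python sketch of the final check: the layouts returned by the utility
# and the ones registered in the configuration must overlap in exactly two
# stock layouts. The keys below are illustrative assumptions.
utility_layouts = {"classic", "transposed"}
config_layouts = {"classic", "transposed"}
intersection = utility_layouts.intersection(config_layouts)
print(len(intersection))  # 2
```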