12 Planning queries
Planning queries let you send a query to a model for a variety of purposes - e.g. to determine good timeslots, to check that a new job can fit into a route, to query for a travel matrix, or to get ETAs for a route you pass in.
12.1 Appointment booking walkthrough
Appointment booking involves a conversation between the client system and ODL Live where you PUT a planning query (e.g. several different possible time slots for a delivery), then retrieve and act upon the query result.
ODL Live automatically deletes pending queries and their processed results, so you don’t need to do this yourself after completing the query conversation. Deletion happens on a ‘delete oldest first’ basis, or when a query or result has been present for a long time (e.g. delete after 10 minutes, or delete oldest once there are over 1000 queries). You may therefore need to resend a query if it has been deleted, although this should only happen in very exceptional circumstances. For performance reasons, in webapp1 only, queries are not currently persisted to the database layer (they are held only in server memory), so in the unlikely event of a server reboot during the conversation you may also need to resend your query.
The query conversation for an asynchronous planning query is as follows:
PUT planning query
The planning query HTTP body is a JSON object of type ODLPlanningQuery. The ODLPlanningQuery object can contain multiple slots to be tested stored as ODLJob objects and acceptance tolerances such as ‘only load if we don’t increase lateness’.
- If the response was 404 NOT FOUND, the model may not be initialised yet - try again one second later.
GET planning query result
The planning query result body is a JSON object of type ODLPlanningQueryResult.
- If HTTP response was 404 NOT FOUND, resend original query (this is rare but can happen).
- Otherwise inspect the ODLPlanningQueryResult’s processedStatus field:
- If processedStatus equals WAITING or STARTED poll again one second later.
- If processedStatus equals ERROR, there is an error in your query data (e.g. invalid job format) and the query could not be processed. Sending the query again will likely produce the same error. An error does not indicate whether the job could be loaded onto a route or not; it just indicates corrupt data.
- If processedStatus equals COMPLETE go to next step.
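The polling loop above can be sketched in Python. This is a minimal sketch, not part of the ODL Live API: get_result is a hypothetical stand-in for your HTTP GET of the query result, and delay_seconds / max_attempts are illustrative parameters of our own.

```python
import time

def poll_planning_query(get_result, delay_seconds=1.0, max_attempts=60):
    """Poll for an ODLPlanningQueryResult until processedStatus is COMPLETE.

    get_result is a hypothetical stand-in for your HTTP GET; it returns
    None on 404 NOT FOUND, otherwise the parsed result JSON (a dict).
    """
    for _ in range(max_attempts):
        result = get_result()
        if result is None:
            # 404 NOT FOUND: the query was deleted (rare) - resend it.
            raise LookupError("query not found - resend the original planning query")
        status = result["processedStatus"]
        if status in ("WAITING", "STARTED"):
            time.sleep(delay_seconds)  # poll again one second later
        elif status == "ERROR":
            # Corrupt query data (e.g. invalid job format); resending
            # will likely produce the same error.
            raise ValueError("query data error - do not resend unchanged")
        elif status == "COMPLETE":
            return result
    raise TimeoutError("gave up waiting for the query result")
```
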
Choose slot based on query result
The ODLPlanningQueryResult’s costSortedResults field holds an array of ODLJobPlanningQueryResult objects. Each ODLJobPlanningQueryResult holds the query result for a single ODLJob (i.e. possible slot) in your ODLPlanningQuery. ODLJobPlanningQueryResult’s selected field will contain a selected vehicle object (of type ODLVehicle) if one was found passing the query tolerances (e.g. no increased lateness) for the slot. Each input ODLJob in the ODLPlanningQuery will have a result, but this may be empty (no selected vehicle) if the job could not be loaded according to the tolerances. You may choose to offer all valid slots back to your own customer, or only the one that is most efficient for yourselves based on the routing.
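Choosing a slot from the result can be sketched as follows (Python; the dicts mirror the ODLJobPlanningQueryResult fields described above, and whether you offer all valid slots or only the cheapest is your own business decision):

```python
def loadable_slots(cost_sorted_results):
    """Return the results whose slot passed the query tolerances.

    A slot is loadable when its 'selected' field contains a vehicle;
    costSortedResults is already ordered best-first by the optimiser.
    """
    return [r for r in cost_sorted_results if r.get("selected")]

def best_slot(cost_sorted_results):
    """Return the cheapest loadable slot, or None if no slot fits."""
    loadable = loadable_slots(cost_sorted_results)
    return loadable[0] if loadable else None
```
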
PUT the ODLJob corresponding to the slot with an ODLJobAcceptanceRequest
The ODLJobAcceptanceRequest object is a sub-object of ODLJob and tells ODL Live to test the job for acceptance into the model/plan. ODL Live tests the job for acceptance according to the tolerances you specify. If the acceptance fails, then ODL Live will not try to load the job onto a route again.
GET job acceptance / rejection status
The job acceptance state is stored in a JSON object of type ODLJobAcceptanceState, in its state property. The possible state values are JOB_NOT_FOUND, NO_ACCEPTANCE_REQUEST, PENDING, ACCEPTED and REJECTED. If state is PENDING, repoll every second until the final acceptance state is available.
If job was accepted, then clean-up
- PUT the ODLJob again without the ODLJobAcceptanceRequest sub-object, replacing the version that contained the ODLJobAcceptanceRequest.
If job was rejected
You have two options, either (a) try accepting another slot if one is available or (b) abandon booking the job completely. Whether you try another slot (e.g. your customer’s 2nd preference) or abandon the job depends on your end customer.
In either case, you should clean-up by deleting the job.
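The accept/reject handling above can be sketched as follows (Python; get_acceptance_state, put_job_without_request and delete_job are hypothetical stand-ins for the HTTP GET/PUT/DELETE calls, and the state names match ODLJobAcceptanceState):

```python
import time

def await_acceptance(get_acceptance_state, delay_seconds=1.0, max_attempts=60):
    """Poll the ODLJobAcceptanceState until it is no longer PENDING."""
    for _ in range(max_attempts):
        state = get_acceptance_state()["state"]
        if state != "PENDING":
            # ACCEPTED, REJECTED, JOB_NOT_FOUND or NO_ACCEPTANCE_REQUEST
            return state
        time.sleep(delay_seconds)
    raise TimeoutError("acceptance state still PENDING")

def handle_outcome(state, put_job_without_request, delete_job):
    """Clean up according to the final acceptance state."""
    if state == "ACCEPTED":
        # PUT the job again minus its acceptanceRequest sub-object.
        put_job_without_request()
        return True
    # Rejected: delete the job (then optionally try another slot).
    delete_job()
    return False
```
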
The conversation for a synchronous planning query is near-identical, except you POST the initial planning query and get the result back in the body of your POST (which could take some seconds). See the section on taxi auctions for an example of doing a synchronous query.
ODL Live uses optimistic locking with regard to job acceptance; you specify tolerance(s) (e.g. no increased lateness, no more than 1 hour extra driving…) when you make the planning query. You then specify tolerance(s) again - either the same or different - when you PUT the job for acceptance / rejection. If other jobs have been added between ODL Live processing the planning query and the job acceptance request PUT, it is possible the job will be rejected (although this should be rare).
Only the initial planning query itself is not persisted to the database; the later acceptance / rejection request and its result are always persisted and will still be available after a server reboot.
We demonstrate the appointment booking conversation in the following walkthrough.
12.1.1 Create starting model
Warning: the example JSON data in the following section(s) should contain dates set in the future. ODL Live uses the current date and time in its calculations and will not schedule data in the past. If you are reading this some time after this document was created and the example data is no longer in the future, you should change the dates. When testing, you can also override the current time in the model configuration if you want to schedule jobs in the past.
The following JSON defines a simple model with one vehicle and two service-type jobs:
{
"data" : {
"jobs" : [ {
"stops" : [ {
"type" : "SERVICE",
"coordinate" : { "latitude" : 51.5074, "longitude" : -0.1001 },
"openTime" : "2029-01-01T09:00",
"lateTime" : "2029-01-01T13:00",
"closeTime" : "2029-01-02T13:00",
"durationMillis" : 3600000,
"_id" : "TateModern1"
} ],
"_id" : "TateModern1" }, {
"stops" : [ {
"type" : "SERVICE",
"coordinate" : { "latitude" : 51.5229, "longitude" : -0.155 },
"openTime" : "2029-01-01T09:00",
"lateTime" : "2029-01-01T13:00",
"closeTime" : "2029-01-02T13:00",
"durationMillis" : 3600000,
"_id" : "MadameTussauds1"
} ],
"_id" : "MadameTussauds1"
} ],
"vehicles" : [ {
"definition" : {
"start" : {
"type" : "START_AT_DEPOT",
"coordinate" : { "latitude" : 51.5416, "longitude" : -0.1462 },
"openTime" : "2029-01-01T08:00" },
"end" : {
"type" : "RETURN_TO_DEPOT",
"coordinate" : { "latitude" : 51.5416, "longitude" : -0.1462 },
"lateTime" : "2029-01-01T18:00",
"closeTime" : "2029-01-02T18:00" },
"costPerTravelHour" : 1.0,
"costPerWaitingHour" : 0.5,
"costPerServicingHour" : 1.0,
"costPerKm" : 1.0E-6,
"costFixed" : 100.0,
"costPerStop" : 0.0 },
"_id" : "Camden1"
} ]
}
}
Now copy and paste the model JSON and use Postman to PUT this model to the following URL, as per the previous section(s).
my-base-URL/models/Appointments1
So, in Postman you should:
- Set the URL, replacing my-base-URL with your ODL Live URL.
- Set Basic Authentication username and password.
- Change the HTTP method to PUT.
- Set the request body to the JSON above, ensuring its type is JSON (application/json).
- Press Send.
You should receive the HTTP response 200 OK. If you GET the model using the same URL (but using the GET method) you will see this data again, with some additional properties assigned by ODL Live.
If you GET the optimiser plan you should see both jobs are loaded.
12.1.2 PUT planning query
The following JSON defines a planning query where the same job is tested for three different one hour time slots with no increase in solution lateness allowed for either the new job or pre-existing jobs.
{
"jobs" : [ {
"stops" : [ {
"type" : "SERVICE",
"coordinate" : { "latitude" : 51.5073, "longitude" : -0.1657 },
"openTime" : "2029-01-01T09:00",
"lateTime" : "2029-01-01T10:00",
"closeTime" : "2029-01-02T10:00",
"durationMillis" : 3600000,
"_id" : "HydePark1"
} ],
"_id" : "HydePark1_9AM" }, {
"stops" : [ {
"type" : "SERVICE",
"coordinate" : { "latitude" : 51.5073, "longitude" : -0.1657 },
"openTime" : "2029-01-01T10:00",
"lateTime" : "2029-01-01T11:00",
"closeTime" : "2029-01-02T11:00",
"durationMillis" : 3600000,
"_id" : "HydePark1"
} ],
"_id" : "HydePark1_10AM" }, {
"stops" : [ {
"type" : "SERVICE",
"coordinate" : { "latitude" : 51.5073, "longitude" : -0.1657 },
"openTime" : "2029-01-01T11:00",
"lateTime" : "2029-01-01T12:00",
"closeTime" : "2029-01-02T12:00",
"durationMillis" : 3600000,
"_id" : "HydePark1"
} ],
"_id" : "HydePark1_11AM"
} ],
"tolerances" : [ {
"type" : "LATENESS_SECONDS",
"tolerance" : 0.0
} ]
}
We omit setting the id of the query object because we set it in the PUT. Note that the job ids for each slot must be unique both within the query and within the model. The same restriction does not apply to the stop ids - the same stop ids can be reused within the same query (but cannot also exist in the model). Now PUT the query to the following endpoint:
my-base-URL/models/Appointments1/queries/pending/myQuery1
We add the query using id “myQuery1” to the list of pending queries. You should receive the 200 OK response. If the model has not yet been initialised properly in-memory you may receive a 404 NOT FOUND; if this happens try the PUT again a second later (new models should be initialised within a couple of seconds of the model being PUT).
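Since each slot needs a unique job id but can reuse the same stop id, generating the slot jobs programmatically can be sketched as below (Python; make_slot_jobs is a hypothetical helper of our own, building the same job JSON shape as the query above):

```python
import copy

def make_slot_jobs(base_job, slots):
    """Create one ODLJob per candidate slot from a template job.

    base_job is the job JSON with its single stop; slots is a list of
    (suffix, openTime, lateTime, closeTime) tuples. Job ids must be
    unique, while stop ids may repeat across slots within one query.
    """
    jobs = []
    for suffix, open_t, late_t, close_t in slots:
        job = copy.deepcopy(base_job)
        job["_id"] = f"{base_job['_id']}_{suffix}"  # unique id per slot
        stop = job["stops"][0]
        stop["openTime"], stop["lateTime"], stop["closeTime"] = open_t, late_t, close_t
        jobs.append(job)
    return jobs
```
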
The optimiser will now consider each job (a.k.a. slot) within the query in turn and decide whether it can be loaded and how costly this is (e.g. increased travel time, etc). Each job is considered individually - i.e. the optimiser looks at what happens if it adds job 1 or job 2, but it never looks at what happens if it adds both jobs 1 and 2 at the same time. The jobs within an ODLPlanningQuery are therefore treated as alternative slots.
By default when a query is processed ODL Live will attempt to run multiple optimiser iterations per job in the query to get a better estimate of the cost of adding the job. If you want the query to run faster, you can tell ODL Live to still try inserting each job onto the current routes but not to optimise afterwards. This gives a faster but slightly less accurate estimate of the effect of adding the job to the model. You can configure this by setting the query’s insertOnly field to true:
{
"jobs": [],
"tolerances": [
{
"type": "LATENESS_SECONDS",
"tolerance": 0.0
}
],
"insertOnly": true
}
12.1.3 GET planning result
GET the planning query result from the following endpoint, using the same id we used in the PUT:
my-base-URL/models/Appointments1/queries/results/myQuery1
This should return 200 OK and an ODLPlanningQueryResult object. If the ODLPlanningQueryResult’s processedStatus property equals WAITING or STARTED, repoll one second later until the status is COMPLETE. The ODLPlanningQueryResult object should include the following data, as well as some additional data:
{
"query" : ...,
"processedStatus" : "COMPLETE",
"costSortedResults" : [
{...},
{...},
{...}
]
}
The properties in the ODLPlanningQueryResult may appear in a different order. The query property stores the original planning query and the costSortedResults stores the actual results for each input slot (i.e. each ODLJob).
The objects in the costSortedResults array are of type ODLJobPlanningQueryResult. Examine the JSON of the first ODLJobPlanningQueryResult object in the array. This is the best slot as far as the optimiser is concerned. The most important properties are:
- job. The corresponding job object from the planning query, in our case the job with id HydePark1_10AM.
- selected. The selected vehicle for the job (vehicle with id Camden1) or empty if the job could not be loaded.
- estimatedChange. If the slot could be loaded, the estimated changes to solution metrics from loading the job.
- rank. If the optimiser thinks several slots are as good as each other, they will have the same rank.
For the provided model and query data, at least one slot should have been loadable onto the vehicle. The estimatedChange object should have similar values to:
{
"cost" : 1.0280207334082831,
"latenessSeconds" : 0.0,
"travelSeconds" : 100.8665238230659,
"operationTimeSeconds" : 3600.0,
"waitTimeSeconds" : 0.0,
"usedVehicles" : 0
}
Each property defines the increase in the metric associated with adding the job to the solution. For example, imagine both the first and second results in the costSortedResults array have selected vehicles (i.e. could be loaded) but the first result has usedVehicles = 0 and the second result has usedVehicles = 1. The best result therefore didn’t use any extra vehicles but the second-best result caused an extra vehicle to be used.
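For example, you might filter the loadable slots by your own business rules on the estimatedChange metrics. The sketch below is illustrative only: max_extra_travel_seconds and allow_extra_vehicle are hypothetical thresholds of our own, not ODL Live fields.

```python
def acceptable_changes(results, max_extra_travel_seconds=1800, allow_extra_vehicle=False):
    """Filter loadable slots by business rules on their estimatedChange."""
    out = []
    for r in results:
        if not r.get("selected"):
            continue  # slot could not be loaded at all
        change = r["estimatedChange"]
        if change["travelSeconds"] > max_extra_travel_seconds:
            continue  # too much extra driving for this slot
        if change["usedVehicles"] > 0 and not allow_extra_vehicle:
            continue  # slot would bring an extra vehicle into use
        out.append(r)
    return out
```
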
12.1.4 PUT acceptance request
The following JSON defines the job corresponding to our first (lowest-cost) slot from the planning query result, with an acceptance request added to it:
{
"stops" : [ {
"type" : "SERVICE",
"coordinate" : { "latitude" : 51.5073, "longitude" : -0.1657 },
"openTime" : "2029-01-01T10:00",
"lateTime" : "2029-01-01T11:00",
"closeTime" : "2029-01-02T11:00",
"durationMillis" : 3600000,
"_id" : "HydePark1"
} ],
"acceptanceRequest" : {
"tolerances" : [ {
"type" : "LATENESS_SECONDS",
"tolerance" : 0.0
} ],
"planningQueryId" : "myQuery1"
}
}
Note that we set the original planningQueryId on the ODLJobAcceptanceRequest. This ensures ODL Live can access the original planning query result when it tries to accept the job. We then PUT this job, including its acceptance request, to the list of live jobs using the same job id we used in the planning query:
my-base-URL/models/Appointments1/jobs/HydePark1_10AM
Using the same job id is also essential to properly connect the acceptance request to the original planning query result. Now GET the job acceptance state from the following endpoint:
my-base-URL/models/Appointments1/optimiserstate/jobacceptance/HydePark1_10AM
This endpoint returns an object of type ODLJobAcceptanceState. If you get the following JSON:
{
"jobId" : "HydePark1_10AM",
"state" : "PENDING"
}
where the state is pending, then you need to repoll until the state is either ACCEPTED or REJECTED. This example data should give you a job that gets accepted, but when running live, if another job is accepted between your planning query and the acceptance request, or if the vehicle state changes (e.g. vehicles start running late), then depending on your tolerances the job may be rejected. Also, if the planning query id and the job id don’t match between the original planning query and the acceptance request, then for larger problems it is theoretically possible to get a rejection even without any state changes.
In this acceptance request we didn’t lock the job down to the selected vehicle, only to a particular time slot. If you wanted to lock the job down to the vehicle as well, use skills modelling.
Assuming your job is accepted, the very last step is to clean up by deleting the now-processed acceptance request. PUT the job again to the same endpoint but with the acceptance request removed, as per below:
{
"stops" : [ {
"type" : "SERVICE",
"coordinate" : { "latitude" : 51.5073, "longitude" : -0.1657 },
"openTime" : "2029-01-01T10:00",
"lateTime" : "2029-01-01T11:00",
"closeTime" : "2029-01-02T11:00",
"durationMillis" : 3600000,
"_id" : "HydePark1"
} ],
"_id" : "HydePark1_10AM"
}
After finishing this walkthrough, don’t forget to delete your model.
DELETE my-base-URL/models/Appointments1
12.2 ETA queries (stateless or stateful)
An ETA query lets you get updated estimated times of arrival (ETAs) for stops in a route or routes, taking into account factors like the driver’s current GPS location. The stops and vehicle do not need to be held within a live model (i.e. they can just be defined within the query). ETA queries can be either stateless or stateful (though stateless is the most obvious application for them).
Stateless ETA query. To run stateless ETA queries you would first do a one-off configuration step where you PUT a dummy model to ODL Live, which typically just holds the distances configuration but no jobs, vehicles or vehicle events (i.e. it holds no actual data apart from distances settings, it is stateless). The dummy model sits on your system forever and never needs updating. Next, when you want to query for updated ETAs for a specific route or routes, you would create an ETA query JSON object which defines the route(s) - and so contains (a) a vehicle record, (b) an ordered list of stop ids for the route and (c) a list of job objects including all the stops on the route. You would then POST this ETA query object to the planning query URL for your dummy model and receive the updated ETAs for the route back in the HTTP response. This process is stateless because the dummy model doesn’t hold any state - it just holds your distances settings (its main purpose is actually to ensure the road network graph is loaded and ready). The actual stateful data (jobs, vehicle and its plan) only exists for the duration of the query, which means you don’t have to maintain this state within the ODL Live model.
Stateful ETA query. Normally ETAs in a live model only get updated when the optimiser runs an optimisation burst. If you have lots of models running in parallel, this might only happen a couple of times per minute. If you want updated ETAs instantly for the model, you can run a stateful ETA query against it instead and get them straightaway.
Stateless and stateful queries can be blended together, for example you can have a live model running with jobs and vehicles, and then you do an ETA query which fetches ETAs using the live model jobs and vehicles but with a different plan to the current live plan (where you specify this plan in the ETA query object).
12.2.1 Walkthrough example
The structure of an ETA query JSON is as follows:
{
"queryType" : "ETA",
"etarequest" : {
"data" : {
"vehicles" : [
... vehicle object(s) you want updated ETAs for
],
"jobs" : [
... job objects on the route(s)
]
},
"configuration" : {
... parts of dummy model configuration you want to change
},
"plans" : [ {
... routes you want ETAs for
} ]
}
}
The following JSON defines a simple ETA query using this structure, which queries for ETAs for a single route containing three stops. (This JSON is also available in the file supporting-data-for-docs\example-models\simple-eta-query\simple-3-stop-ETA-query.json in the supporting-data-for-docs provided to ODL Live self-hosting subscribers):
{
"queryType" : "ETA",
"etarequest" : {
"data" : {
"vehicles" : [ {
"definition" : {
"start" : {
"type" : "START_AT_DEPOT",
"coordinate" : {"latitude" : 51.5416,"longitude" : -0.1462
},
"openTime" : "2001-01-01T08:00",
"_id" : "kHrv1383QQmpwP_homg5ww=="
},
"end" : {
"type" : "RETURN_TO_DEPOT",
"coordinate" : { "latitude" : 51.5416,"longitude" : -0.1462},
"closeTime" : "2001-02-12T00:00",
"_id" : "zjcrx2FHREey7jKIQCe8Xg=="
}
},
"_id" : "v0"
} ],
"jobs" : [ {
"stops" : [ {
"type" : "DELIVER",
"coordinate" : {
"latitude" : 51.55682810601671,"longitude" : 0.10038226934439609
},
"_id" : "s2"
} ],
"_id" : "j2"
}, {
"stops" : [ {
"type" : "DELIVER",
"coordinate" : {
"latitude" : 51.54792521152047,"longitude" : 0.17358325662114982
},
"_id" : "s0"
} ],
"_id" : "j0"
}, {
"stops" : [ {
"type" : "DELIVER",
"coordinate" : {
"latitude" : 51.497338282453036,"longitude" : -0.06861027643520673
},
"_id" : "s1"
} ],
"_id" : "j1"
} ]
},
"configuration" : {
"timeOverride" : {
"override" : "2001-01-01T00:00",
"overrideType" : "SCHEDULER"
}
},
"plans" : [ {
"vehicleId" : "v0",
"stopIds" : [ "s2", "s0", "s1" ]
} ]
}
}
As both the plans and vehicles fields are arrays, you can send either a single route or multiple routes at once to have their ETAs updated. In this example we just send one route, with the objects and fields defined as follows:
Top level field queryType is set to “ETA” (this must be set or you will get an error).
All data and configuration for the ETA request are set inside the etarequest object:
The vehicle for the route is defined in etarequest.data.vehicles[0]. This includes the vehicle start/end time and coordinates (start/end coordinates can be left blank though). If you have any stops already dispatched to the vehicle, you could also include them in the vehicle’s etarequest.data.vehicles[0].dispatches list and then their ETAs would appear in the returned plan under dispatchedIncomplete (providing you don’t have completion events for them).
The three jobs on the route are defined in etarequest.data.jobs. etarequest.plans[0].stopIds references the stop ids and not the job ids of these jobs.
We override the current time used for calculations in etarequest.configuration.timeOverride. The optimiser assumes the vehicle can’t depart for planned stops until the current time, so the current time forms part of the equation. If you don’t override it (i.e. don’t include the timeOverride field), the current real-world time is used, which you would generally want to do for updating live ETAs. See section on setting time override for more details.
We include the plan for which we want ETAs in etarequest.plans[0], setting the vehicleId and stopIds fields appropriately. Note that if stop ids in etarequest.plans[0].stopIds are not present within the stops defined inside the job objects, timings for them will not be included in the ETAs but ETAs will still be generated (i.e. the missing stops are dropped from the route).
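Assembling such a query can be sketched programmatically (Python; build_eta_query is a hypothetical helper of our own that produces the same JSON structure as above):

```python
def build_eta_query(vehicles, jobs, plans, time_override=None):
    """Assemble an ETA planning query from route data.

    plans is a list of {"vehicleId": ..., "stopIds": [...]} dicts;
    stopIds must reference stop ids (not job ids) defined in jobs.
    """
    query = {
        "queryType": "ETA",  # mandatory, or the query errors
        "etarequest": {
            "data": {"vehicles": vehicles, "jobs": jobs},
            "plans": plans,
        },
    }
    if time_override is not None:
        # Omit timeOverride entirely to use the current real-world time.
        query["etarequest"]["configuration"] = {
            "timeOverride": {"override": time_override, "overrideType": "SCHEDULER"}
        }
    return query
```
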
To run this ETA query we need a dummy model to run it against, which would normally hold the distances settings. In particular, when using a road network graph, the dummy model should reference the road network graph in its model.configuration.distances object, so the road network graph gets loaded by the system. For this example, we’re going to keep things simple though and so we’re going to use straight lines instead of a road network graph for travel calculation. Our dummy model can therefore be 100% empty - i.e. it needs no content. To PUT the empty model to ODL Live we do:
PUT my-base-URL/models/dummymodel
with an empty JSON body content for the empty model:
{
}
You can run an ETA query as either a synchronous or an asynchronous query (see the section on appointment bookings for more details on synchronous vs asynchronous queries). Synchronous queries (where you get the ETAs in the returned HTTP body) are easier to use, although they can potentially time out under heavy server load. We use a synchronous query for this example. POST the ETA query JSON to the following URL:
POST my-base-URL/models/dummymodel/queries/synchronous/
You must have the forward slash on the end of the URL or you will get an error. This example returns the following JSON (shown with some fields omitted that are not important here):
{
"processedStatus": "COMPLETE",
"etaresult": {
"vehiclePlans": [
{
"vehicleId": "v0",
"plannedStops": [
{
... time estimates for the 1st stop
"stopId": "s2",
"timeEstimates": {
"arrival": "2001-01-01T08:12:46.552017856",
"start": "2001-01-01T08:12:46.552017856",
"complete": "2001-01-01T08:12:46.552017856"
}
},
{
... time estimates for the 2nd stop
},
{
... time estimates for the 3rd stop
}
],
"planEndPoint": {
... this is vehicle end time
"time": "2001-01-01T08:35:13.883584418"
},
"timeStatistics": {
... travel statistics for the plan are shown here
},
"timeStatisticsInclDispatchedIncomplete": {
... travel statistics for the plan + dispatched incomplete stops
},
"dispatchedIncompleteStops": [
... time estimates for dispatched incomplete stops
]
}
]
}
}
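A sketch of extracting the arrival estimates from the result object (Python; the sample dictionary in the test is abbreviated from the structure above):

```python
def arrival_times(eta_result):
    """Map (vehicleId, stopId) -> arrival time string from an ETA query result."""
    etas = {}
    for plan in eta_result["etaresult"]["vehiclePlans"]:
        for stop in plan["plannedStops"]:
            etas[(plan["vehicleId"], stop["stopId"])] = stop["timeEstimates"]["arrival"]
    return etas
```
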
If you have violated constraints in your plan (for example, you have added more jobs to the vehicle than its capacity constraints allow), the violating stops will still be included in the plan returned by the query. By design, the returned object shows you ETAs for a plan identical to the input plan you requested, regardless of whether that input plan breaks the rules or not.
12.2.2 Stateful queries and overriding stateful data
The ETA query works on the principle that the query JSON overrides top-level fields in the ‘dummy’ model and its configuration, then calculates ETAs for the current plan and any dispatched-incomplete stops. If you have an existing live model with its own job and vehicle data running on your ODL Live server, and then you send the following ETA query to it:
{
"queryType" : "ETA",
"etarequest" : {
"data" : {
},
"configuration" : {
}
}
}
you will just get the model’s current plan but with updated ETAs, because you haven’t overridden jobs, vehicles or the plan in the query. You can therefore get updated ETAs before the optimiser runs its next burst on your model. Leaving the data and configuration objects out entirely would also get you exactly the same result:
{
"queryType" : "ETA",
"etarequest" : {
}
}
Alternatively, you could use the existing jobs and vehicles in the live model but just override the plan to see what the ETAs would be like with a plan that’s different to the current plan on your live system:
{
"queryType" : "ETA",
"etarequest" : {
"plans" : [ {
"vehicleId" : "v0",
"stopIds" : [ "s2", "s0", "s1" ]
} ]
}
}
In this case, vehicle v0 and stops s0, s1 and s2 must exist in the live model already running on your server.
If you include any of the following fields in your ETA query, they will override the equivalent fields in your existing ‘dummy’ model; if you don’t include a field, the original ‘dummy’ model field is used instead:
- etarequest.data.jobs overrides model.data.jobs in your live model, replacing all the jobs.
- etarequest.data.vehicles overrides model.data.vehicles in your live model, replacing all the vehicles.
- etarequest.data.plan overrides the plan in your live model.
- etarequest.configuration.distances overrides the distances settings in your live model. Note, however, that you should not reference a road network graph in the overridden distances that is not already included in your existing live model; otherwise ODL Live may load the graph just for your query and performance will be very slow. You can, however, do things like set the roadNetworkTimeMultiplier differently in the distances configuration in your query, to see how the ETAs change if travel times are faster/slower.
- etarequest.configuration.timeOverride overrides model.configuration.timeOverride in your live model.
12.2.3 Performance considerations
We expect the ETA query to take a second or at most a couple of seconds to process (assuming perhaps 100 stops max in the route) unless (a) there is a queue of queries waiting to be processed or (b) you have made the mistake of referencing a road network graph in the query (in etarequest.configuration.distances) that is not already referenced in a live model. If you send too many synchronous queries at once to a busy server and they end up queueing, it is possible you might hit a timeout. Internally to ODL Live we have a 2 minute timeout for synchronous queries which are queueing. Tomcat (webapp1) / Micronaut (webapp2) may have timeout settings for HTTP requests, as may your hosting server, load balancer etc.
There are various settings which affect the performance of ETA queries and which can be used to speed up the queries. These are set application-wide in the application.yml file and are discussed separately in the installation documentation for self-hosting subscribers. The key settings are:
optimiser.threadScheduler.nbThreads
This is the number of threads available to the optimiser.
optimiser.threadScheduler.nbQueryOnlyThreads
This is the number of optimiser threads which are reserved for queries only. This setting is only used in webapp1.
optimiser.distancesCache.autoPurge.maxLifetimeMillis4LowPriorityObjs
This is the maximum lifetime that travel information for latitudes and longitudes not found in a live model will stay in the distances cache. By default it is 1 minute, so if you query for ETAs for stops not found in the live model (i.e. a stateless query) and then you run the same query again less than a minute later (and the distances cache hasn’t been emptied because it became too full), the second query will be a lot faster as the travel information is still cached. This is useful if you’re requerying often when a vehicle GPS location changes, but its assigned stops haven’t changed.
12.3 Hard time windows valid range queries
ODL Live supports a query type called HARD_TW_RANGE which is designed for a model that only uses hard time windows (i.e. no late time or custom arrival time penalties), and which allows you to query for the valid time window range where a new job can fit in. Imagine we’ve got a new job j and we want it to be served between 10:00 and 10:10. We can send a HARD_TW_RANGE query to ODL Live, which will test if the job can be served then; if it cannot, ODL Live will find the closest time windows both before and after in which it can be served. This logic works slightly differently depending on whether you have a job with one stop or two stops:
1 stop job (e.g. delivery from depot or field service technician). The query will modify the single stop’s time windows to see when it can fit in.
2 stop jobs (e.g. pick up and then drop off a passenger). The query will modify the first stop’s time windows (i.e. the pickup stop’s time windows) but the second stop’s time windows will not be modified.
Four different variables can be set to control the search for a valid time window:
lowerLimitMillis. This is the maximum number of milliseconds before the initial time window’s openTime that the query will search back for. For example, if you’re only interested in time windows no more than 3 hours before the current window, set lowerLimitMillis = 1000 × 60 × 60 × 3.
upperLimitMillis. This is the upper limit of milliseconds after the initial time window’s closeTime that the query will search forward for. Use this if for example, you want a window no more than 3 hours after the current window.
searchToleranceMillis. This controls how accurate the search is, in milliseconds. Try setting it to 1 millisecond and increase it if you find the query is too slow.
vehicleFilter (optional). This is an optional array of strings that you can use to filter for specific vehicles - e.g.
Set vehicleFilter=[“vehicle1”] if you only want to test a vehicle with id vehicle1.
Set vehicleFilter=[“vehicle1”,“vehicle2”] if you only want to test vehicles with ids vehicle1 or vehicle2.
Do not set this field (leave out of the JSON) if you want to test all vehicles.
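The four variables above can be assembled as follows (Python; tw_search_settings is a hypothetical helper of our own, covering only the documented fields - how they embed within the full query JSON is not shown in this section):

```python
HOUR_MS = 1000 * 60 * 60  # milliseconds per hour

def tw_search_settings(hours_before=3, hours_after=3, tolerance_ms=1, vehicle_filter=None):
    """Build the HARD_TW_RANGE search-control fields.

    e.g. hours_before=3 gives lowerLimitMillis = 1000 * 60 * 60 * 3,
    matching the 3-hour example in the text above.
    """
    settings = {
        "lowerLimitMillis": hours_before * HOUR_MS,
        "upperLimitMillis": hours_after * HOUR_MS,
        "searchToleranceMillis": tolerance_ms,
    }
    if vehicle_filter is not None:
        settings["vehicleFilter"] = vehicle_filter  # omit to test all vehicles
    return settings
```
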
The exact logic used is as follows:
ODL Live receives the query job and checks its format for errors (e.g. an error is thrown if the job has lateTime set).
ODL Live checks if the initial time window, defined by the stop’s openTime and closeTime fields, can fit into the available vehicles/routes.
- Yes: processing stops.
- No: ODL Live searches for earlier and later valid time windows.
Earlier search. ODL Live searches for an earlier time window that fits in. If the initial time window is (openTime, closeTime) then ODL Live searches for a new time window (openTime′, closeTime′). (openTime′, closeTime′) has the same width as the original window (i.e. where width is defined as closeTime − openTime). We also ensure that the width is at least 2 × searchToleranceMillis. The centre c′ of the new time window (where c′ = (openTime′ + closeTime′)/2) is the latest possible time in the range openTime − lowerLimitMillis ≤ c′ ≤ openTime, to within searchToleranceMillis accuracy. We set the centre as the latest possible time found (and not openTime′ instead), as this results in a window that is easier to still serve when further jobs are added.
Later search. The logic for the later search is identical to earlier but reversed. The centre c′ of the new time window is the earliest possible time in the range closeTime ≤ c′ ≤ closeTime + upperLimitMillis, to within searchToleranceMillis accuracy.
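The window arithmetic above can be illustrated with a small sketch (Python, using millisecond timestamps). This only shows the width and centre relationships described in the earlier search; it does not perform the feasibility search itself, and earlier_window is a hypothetical helper of our own.

```python
def earlier_window(open_ms, close_ms, centre_ms, tolerance_ms):
    """Build a candidate window (openTime', closeTime') centred on c'.

    Width is preserved from the original window (closeTime - openTime)
    but never smaller than 2 * searchToleranceMillis.
    """
    width = max(close_ms - open_ms, 2 * tolerance_ms)
    return centre_ms - width // 2, centre_ms + width // 2
```

For example, a 10-minute window (width 600000 ms) recentred on an earlier candidate time c′ keeps its 10-minute width, with c′ at its midpoint.
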
12.3.1 Walkthrough
This walkthrough will only work correctly with ODL Live 1.3.1 or later.
The following walkthrough uses example JSON files which for self-hosting ODL Live subscribers are available in the directory:
supporting-data-for-docs\example-models\hard-tw-range-query-passengers-example\hard-tw-range-query-passengers-example.zip
For hosted subscribers, please contact Open Door Logistics directly for the files.
The file near-full-model.json contains a starting model with 2 vehicles and 5 passenger transportation jobs, with tight time windows. To use this model, you will first need to edit the JSON and update the paths of the road network graph and traffic learner model:
Set JSON field model.configuration.distances.graphDirectory to the directory of the unzipped road network graph, found in supporting-data-for-docs\brooklyn-nyc-traffic-model. You will need to unzip the graph first and remember that in JSON the backslash in a Windows path must be a double backslash (i.e. “C:\\supporting-data-for-docs\\brooklyn-nyc-traffic-model” not “C:\supporting-data-for-docs\brooklyn-nyc-traffic-model”)
Set JSON field model.configuration.distances.learnerFilename to the traffic learner model file supporting-data-for-docs\brooklyn-nyc-traffic-model\brooklyn-traffic-learner-model, remembering to use \\ not \.
After editing this JSON use Postman (or a similar tool) to PUT the model JSON to the following URL on your ODL Live server:
PUT ../models/hardTWRange
As the 2 vehicles in this model are only operating for a couple of hours, and they can only carry 2 jobs at once, this model is already semi-full. It can fit some new jobs in, but not all, and most cannot fit at their original target time window. The model is specially constructed to work with HARD_TW_RANGE queries. All vehicles and jobs in the model use only hard time windows - they have openTime and closeTime fields set but do not set the ‘soft time window’ fields lateTime or multiTWs. A hard-time-window-only setup like this is appropriate for next-day planning but not same-day / realtime planning. Only hard time windows can be used with HARD_TW_RANGE queries; you should not use soft time windows. For this scenario we have narrow time windows (the difference between openTime and closeTime is only 15 minutes), but crucially there is still a gap between openTime and closeTime. Never set openTime equal to closeTime (a zero-duration time window), as the job will not be served. Allow at least one minute (e.g. closeTime = openTime + 1 minute) and ideally longer.
Once you’ve PUT this model, you should be able to inspect the two planned routes in the ODL Live dashboard. Next we run a HARD_TW_RANGE query for a new job against this loaded model. The supporting-data-for-docs\example-models\hard-tw-range-query-passengers-example directory contains 10 different example hard time window range queries you can run, from example00 to example09, each containing a different new job with a different target time. Each example has the following files:
exampleXX-step1-QUERY-(YYYYY).json - original query JSON which you POST to the server.
exampleXX-step2-RESULT-(YYYYY)-result.json - expected result for the query, returned from ODL Live.
exampleXX-step3-JOB-TO-ADD-tws-ORIGINAL.json - job to add with job acceptance request if original time window was OK.
exampleXX-step3-JOB-TO-ADD-tws-BEFORE.json - job to add with job acceptance request if we found a valid time window before.
exampleXX-step3-JOB-TO-ADD-tws-AFTER.json - job to add with job acceptance request if we found a valid time window after.
Where XX is the number of the example (e.g. ‘03’), and YYYYY is the expected result from the query - either ‘original-ok’, ‘before-and-after-ok’, ‘only-before-ok’, ‘only-after-ok’ or ‘failed’ if no time window was found.
Open the JSON file example01-step1-QUERY-(only-after-ok).json in your favourite text editor and inspect it:
{
"jobs": [
{
"quantities": [1 ],
"onboardTimePenalty": {
"type": "FROM_LEAVE_LOC",
"directTravelTimeBased": {
"multiplyDirectTimeLimitBy": 1.5,
"addHours2DirectTime": 0.16666666666666666,
"type": "HARD_LIMIT"
}
},
"stops": [
{
"type": "SHIPMENT_PICKUP",
"durationMillis": 180000,
"coordinate": {
"latitude": 40.59883499145508,"longitude": -73.93753051757812
},
"openTime": "2019-07-26T09:19",
"closeTime": "2019-07-26T09:34",
"_id": "Job6P"
},
{
"type": "SHIPMENT_DELIVERY",
"durationMillis": 180000,
"coordinate": {
"latitude": 40.64527130126953,"longitude": -74.01952362060547
},
"_id": "Job6D"
}
],
"_id": "Job6",
}
],
"queryType": "HARD_TW_RANGE",
"keepPlan" : true,
"alwaysStoreQueryResult" : true,
"hardTWRangeRequest": {
"lowerLimitMillis": 7200000,
"upperLimitMillis": 7200000,
"searchToleranceMillis": 1
}
}
The most important fields in this JSON query object are:
jobs. This is the jobs array and for a HARD_TW_RANGE query it should only contain a single job element (otherwise an error will be thrown).
This job is a pickup-dropoff type job used in passenger transportation, so it has 2 stops.
It also has a maximum on-board time limit based on the travel time from the pickup to dropoff location.
The target (i.e. original) time window is stored in jobs[0].stops[0].openTime and jobs[0].stops[0].closeTime. This is a hard time window only, as lateTime is not set (setting lateTime or multiTWs will result in the query returning an error, as these are not supported for this query type).
queryType. The type of planning query we’re doing, it should be set to HARD_TW_RANGE.
hardTWRangeRequest.lowerLimitMillis. How far to search back before the original time window for a valid time, in milliseconds.
hardTWRangeRequest.upperLimitMillis. How far to search forward after the original time window for a valid time, in milliseconds.
hardTWRangeRequest.searchToleranceMillis. The tolerance of the search in milliseconds. Try setting to 1 and increase if the query is too slow.
keepPlan and alwaysStoreQueryResult. Both these fields should be true. They tell ODL Live that the plan (routes with the new job) should be included in the query result object, and that even when we process the query synchronously using a POST (see below), ODL Live should still store the query result. ODL Live can store the results of recent queries in-memory so the plan can be used in the job acceptance/rejection conversation. If you do not store the query result with the plan (i.e. don’t set both these fields to true), there are edge cases where ODL Live may fail to accept the job when you add it permanently to the system in the job acceptance/rejection phase. The saved query results are automatically cleared out after a while, so the client code does not need to delete them.
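The query body can also be assembled programmatically. The helper below is an illustrative sketch (not part of any ODL Live SDK) that simply enforces the rules stated above: a single hard-time-window job, with keepPlan and alwaysStoreQueryResult both true. You would then POST the returned dict as the JSON body.

```python
def build_hard_tw_range_query(job, lower_limit_ms, upper_limit_ms,
                              tolerance_ms=1):
    """Assemble a HARD_TW_RANGE planning query body for a single job.
    Raises ValueError if the job uses soft time window fields, which
    this query type rejects."""
    for stop in job.get("stops", []):
        if "lateTime" in stop or "multiTWs" in stop:
            raise ValueError(
                "HARD_TW_RANGE jobs must use hard time windows only")
    return {
        "jobs": [job],  # exactly one job is allowed for this query type
        "queryType": "HARD_TW_RANGE",
        "keepPlan": True,                 # include the plan in the result
        "alwaysStoreQueryResult": True,   # keep result for job acceptance
        "hardTWRangeRequest": {
            "lowerLimitMillis": lower_limit_ms,
            "upperLimitMillis": upper_limit_ms,
            "searchToleranceMillis": tolerance_ms,
        },
    }
```
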
Now POST the contents of example01-step1-QUERY-(only-after-ok).json to the following URL (ensuring you change PUT to POST in Postman otherwise you will get a method not allowed result):
POST ../models/hardTWRange/queries/synchronous/
The POST should return the following JSON (where we’ve summarised some of the more detailed information):
{
"query": {
... original query object
},
"processedStatus": "COMPLETE",
"textDescription": "The initial time window could not be served but a valid time was found later",
"processingStats": {},
"hardTWRangeResult": {
"after": {
"job": {
... This is the 'after' version of the job that was found
"stops": [
{
... time window ODL Live has selected
"openTime": "2019-07-26T09:34:03.515625",
"closeTime": "2019-07-26T09:49:03.515625",
},
{....
}
],
"_id": "Job6",
},
"selected": {
... selected vehicle object
},
"plan": {
... plan with query job added
},
"textDescription": "This is the valid insertion found after the initial time window. The new time window is stored on the job."
}
},
... This id has been assigned to the result by ODL Live
"_id": "30Q8wT12QTKlSUR_VIVoRA==",
}
For this query, the original time window could not be served and no time window could be found earlier than it either, so only a time window after is available. The different results for using the original time window, a time window before and one after are found in the fields:
hardTWRangeResult.original
hardTWRangeResult.before
hardTWRangeResult.after
The respective field is omitted from the JSON if no result could be found for it (i.e. there is no hardTWRangeResult.before field if no time window before the original time was found). A before or after result will only be present if the original window didn’t fit. If no valid time window was found at all, none of these fields will be present.
For the after result we found, the following fields are the most important:
hardTWRangeResult.after.job - this is the version of the job containing the new selected time window that was chosen. The time window is stored in job.stops[0].openTime and job.stops[0].closeTime. This is the version of the job you should then PUT to the system.
hardTWRangeResult.selected - the vehicle that was selected.
hardTWRangeResult.plan - the optimiser plan (including stop arrival times) with the new job added.
The top-level id is also important - if you POST a synchronous query ODL Live will assign a unique ID for this query result. You need to use this result’s id in the job acceptance/rejection phase.
If the original or before results are available, their fields follow the same pattern as this after example. If you try a few of the different example queries and inspect the results they return, you will see some have original, before, after or no results depending on the job.
After getting a valid time window for the job, typically the result would then be presented back to a human user (depending on your use case), and assuming the human user is OK with the selected time window, you will then PUT the job to the model. When you PUT the job to model, be sure to use the correct time window from the correct result - either the openTime and closeTime from the version of the job in result.hardTWRangeResult.original.job, or from result.hardTWRangeResult.before.job or from result.hardTWRangeResult.after.job.
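A small helper can pick out which branch of the result to use. The preference order here (original, then before, then after) is an illustrative assumption - your application may instead present all available windows to the user.

```python
def pick_time_window(result):
    """Return (branch, job) for the first available branch of a
    HARD_TW_RANGE result, preferring original, then before, then after.
    The returned job already carries the selected openTime/closeTime.
    Returns (None, None) if no valid time window was found."""
    branches = result.get("hardTWRangeResult", {})
    for branch in ("original", "before", "after"):
        if branch in branches:
            return branch, branches[branch]["job"]
    return None, None
```
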
When you PUT the job to the model you should follow the job acceptance conversation as you would for normal planning queries (see appointment booking section), otherwise you risk double-booking if another job is added at the same time. The following JSON shows the structure of the job to be added, including the acceptanceRequest object within the job:
{
"quantities" : [ 1 ],
"onboardTimePenalty" : {
....
},
"breakAllowedBetweenStops" : true,
"stops" : [
....
],
"acceptanceRequest" : {
"tolerances" : [ ],
"planningQueryId" : "30Q8wT12QTKlSUR_VIVoRA=="
},
"_id" : "Job6",
}
Crucially, the field job.acceptanceRequest.planningQueryId should be set to the id field of the result returned by ODL Live. This allows ODL Live to access the plan object from the query result when it processes the job acceptance/rejection phase. To see why this matters, imagine we query the model when it has a current set of planned routes A, and the query finds a valid insertion position for the new job in routes A. Now imagine that after the query is processed, but before the job acceptance/rejection phase, ODL Live finds an even better set of planned routes B, which is much more efficient than A but can no longer easily fit in the query job. The job acceptance/rejection phase could then theoretically fail to insert the new job, though this would be rare. If planningQueryId has been set properly to reference the original plan in the stored result object, however, ODL Live can still revert back to plan A if this loads more jobs, meaning the job is still accepted.
12.4 Travel matrix queries
If you want to generate a large travel matrix, you are advised to do it using the matrix command line and not the webservice matrix query detailed here.
Using the planning queries API (see appointment booking walkthrough and taxi auctions) you can query for a travel matrix. Internally to ODL Live the matrix query is dealt with asynchronously to avoid overloading the CPUs. However you can make a synchronous call (as per the taxi auctions section) and then internally to ODL Live the HTTP request is kept open until the matrix is calculated, so the client only has to make a single HTTP call (i.e. a single synchronous call). The synchronous planning queries call is:
POST my-base-URL/models/my-model-id/queries/synchronous/
When you call a planning query for a matrix, you must call it against a specific model (defined by the model id in the URL) as the model holds the distances configuration.
You are advised to use the matrix calculation function for smaller matrices - e.g. thousands of A to B elements but not tens of thousands. The matrix result is held temporarily in-memory (as a zipped JSON file), so very large matrices might negatively impact the system. If you use the synchronous call there is a default 2 minute timeout internally to ODL Live, resulting in a 408 “REQUEST TIMEOUT” HTTP response if the matrix is not calculated within this time (which is very unlikely, but theoretically possible if you have many concurrent matrix calls, many other concurrent planning queries, or many models which are very far behind with running optimisation bursts). In contrast, there is a default 10 minute timeout on the asynchronous call (see the appointment booking walkthrough for an example). The synchronous call deletes the stored in-memory matrix result as soon as it sends back the HTTP response (though the A-to-B results will stay in the global distances cache for longer, making it quicker to re-query). The asynchronous call leaves the matrix result in-memory, though query results are by default deleted from memory if they have been present for more than an hour, if the total number of stored query results exceeds 1000, or if they exceed 256 MB in memory.
You are advised to try using the synchronous call first. The following JSON defines a planning query which requests a matrix:
{
"queryType" : "MATRIX",
"matrixRequest" : {
"froms" : [ {
"latitude" : 51.5416,
"longitude" : -0.1462,
"id" : "CMD"
}, {
"latitude" : 51.511892,
"longitude" : -0.123313,
"id" : "COV"
} ],
"tos" : [ {
"latitude" : 51.4998,
"longitude" : -0.1252,
"id" : "PARL"
}, {
"latitude" : 51.5073,
"longitude" : -0.1657,
"id" : "HYDE"
} ],
"timesUTC" : [ "2019-01-01T06:00:00", "2019-01-01T18:00:00" ],
"distanceProfileIds" : [ "", "fast" ],
"includeDest2Origins" : true
}
}
The queryType must be MATRIX and the matrix request information is stored in a sub-object called matrixRequest. This sub-object has two arrays of locations - froms and tos. Each location object has 3 fields:
{
"latitude" : 51.5073,
"longitude" : -0.1657,
"id" : "HYDE"
}
For this JSON example we are assuming a model exists which contains both a default distances profile and an alternative distances profile called “fast”. The following line in the JSON:
"distanceProfileIds" : [ "", "fast" ],
indicates we want the matrix calculated for both the default profile (indicated by the empty string "") and the "fast" profile. If you only use the default profile (i.e. you don’t have multiple modes of transport like bike, car, foot etc.) then don’t include the distanceProfileIds field in the JSON. If you query for a distances profile that doesn’t exist, the returned object will have processedStatus equal to ERROR and should have relevant information in the errorMessage field.
The timesUTC array contains the departure times we want to query for:
"timesUTC" : [ "2019-01-01T06:00:00", "2019-01-01T18:00:00" ],
If you omit this field a default time will be used.
The field includeDest2Origins is also set to true. If true, the matrix calculation will also swap the froms and tos around and calculate from all tos to all froms. So, if you want to query the travel times both to and from a single location to many locations, put that single location in the tos, put the many locations in the froms and set includeDest2Origins = true. If includeDest2Origins is omitted, it defaults to false. For this example JSON we are therefore requesting 4 matrices (two different times × two different profiles), each of which has |froms|×|tos|×2 elements.
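The request body above can be built with a small helper like the following sketch (a hypothetical convenience function, not part of ODL Live), which omits the optional fields entirely when they are not wanted, as recommended above.

```python
def build_matrix_request(froms, tos, times_utc=None, profile_ids=None,
                         include_dest2origins=False):
    """Build the JSON body for a MATRIX planning query.
    froms/tos are lists of {"latitude": ..., "longitude": ..., "id": ...}
    location objects. Optional fields are left out of the JSON entirely
    rather than sent empty, matching the documented behaviour."""
    req = {"froms": froms, "tos": tos}
    if times_utc:
        req["timesUTC"] = times_utc
    if profile_ids:
        req["distanceProfileIds"] = profile_ids
    if include_dest2origins:
        req["includeDest2Origins"] = True
    return {"queryType": "MATRIX", "matrixRequest": req}
```
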
The JSON returned will contain several fields which are used for the general planning queries function and can be ignored for matrix calculations. The relevant fields are shown in the following JSON:
{
"processedStatus": "COMPLETE",
"matrix": [
{
"time": "2019-01-01T06:00",
"rows": [
{
"f": "CMD",
"t": "PARL",
"m": 4870,
"s": 243
},
{
"f": "CMD",
"t": "HYDE",
"m": 4046,
"s": 202
},
...
]
},{
"time": "2019-01-01T06:00",
"distancesProfileId": "fast",
"rows": [
...
]
},{
"time": "2019-01-01T18:00",
"rows": [
...
]
},{
"time": "2019-01-01T18:00",
"distancesProfileId": "fast",
"rows": [
...
]
}
]
}
The array matrix has an object for each distanceProfileId and departure time combination, each of which has an array rows:
{
"time": "2019-01-01T18:00",
"distancesProfileId": "fast",
"rows": [
...
]
}
If the object refers to the main distances profile, the field distancesProfileId is omitted. The array rows contains the A to B results in a compact form:
{
"f": "CMD",
"t": "HYDE",
"m": 4046,
"s": 202
}
where f is the from id, t is the to id, m is the distance in integer metres and s is the time in integer seconds. The field names are single-letter to keep the size of the JSON small.
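To consume the compact rows, a sketch like the following can index the whole response by (profile, time, from, to). Note the main profile is keyed with the empty string because its distancesProfileId field is omitted from the JSON.

```python
def index_matrix(result):
    """Index a MATRIX query result as
    {(profile_id, time, from_id, to_id): (metres, seconds)}.
    The main distances profile is reported with profile id "" because
    the distancesProfileId field is omitted for it in the JSON."""
    index = {}
    for block in result.get("matrix", []):
        profile = block.get("distancesProfileId", "")
        time = block["time"]
        for row in block["rows"]:
            index[(profile, time, row["f"], row["t"])] = (row["m"], row["s"])
    return index
```
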
12.4.1 Speed up queries using a dummy model
If you have locations (latitudes-longitudes) that you’re commonly including in matrix queries, you can speed up the queries massively by creating dummy data in the model you query, which is designed to keep those locations held within the global distances cache.
Say you’re always querying between a new location and 100 fixed known locations. You then create a model which references those known locations in an appropriate manner and the pre-processed forward and backwards data for those 100 locations will then be held in the global distances cache. Your queries will then run much faster.
The best way to do this is with a model where all jobs are dispatched but not complete, so the optimiser must calculate the distances to reconstruct the state of the vehicles, but will not waste any time actually optimising the model. Note that the lat-long values must match precisely between your matrix request and dummy model data for this to work (e.g. 51.511892 and 51.511893 are considered different).
The following dummy model JSON demonstrates this for 2 latitudes-longitudes that we want to cache:
{
"data": {
"jobs": [{
"stops": [{
"type": "DELIVER",
"coordinate": {
"latitude": 51.511892,
"longitude": -0.123313
},
"_id": "Dummy1"
}],
"_id": "Dummy1"
},{
"stops": [{
"type": "DELIVER",
"coordinate": {
"latitude": 51.5416,
"longitude": -0.1462
},
"_id": "Dummy2"
}],
"_id": "Dummy2"
}],
"vehicles": [{
"definition": {
"start": {
"type": "START_AT_DEPOT",
"coordinate": {
"latitude": 51.5073,
"longitude": -0.1657
},
"openTime": "2019-01-01T00:00",
"_id": "Dummy1Start"
},
"end": {
"type": "RETURN_TO_DEPOT",
"coordinate": {
"latitude": 51.5073,
"longitude": -0.1657
},
"closeTime": "2020-01-01T00:00",
"_id": "Dummy1End"
}
},
"dispatches": [{
"stopId": "Dummy1"
}],
"_id": "Dummy1"
},{
"definition": {
"start": {
"type": "START_AT_DEPOT",
"coordinate": {
"latitude": 51.5073,
"longitude": -0.1657
},
"openTime": "2019-01-01T00:00",
"_id": "Dummy2Start"
},
"end": {
"type": "RETURN_TO_DEPOT",
"coordinate": {
"latitude": 51.5073,
"longitude": -0.1657
},
"closeTime": "2020-01-01T00:00",
"_id": "Dummy2End"
}
},
"dispatches": [{
"stopId": "Dummy2"
}
],
"_id": "Dummy2"
}]
},
"configuration": {
"distances": {
"useRoadNetwork": false,
"straightLineSpeedMetresPerSec": 22.352,
"straightLineDistanceMultiplier": 1
},
"timeOverride": {
"override": "2019-01-01T00:00",
"overrideType": "SCHEDULER"
}
},
"_id": "MyModel"
}
In this model:
We have overridden the current time in model.configuration.timeOverride and set the vehicle start times to this same time. We therefore don’t need to worry about the model behaviour changing as real-world time passes.
We have 2 dummy jobs (“Dummy1” and “Dummy2”) containing a single stop each with the locations we want to cache.
We have 2 dummy vehicles (“Dummy1” and “Dummy2”) corresponding to the 2 dummy jobs. The vehicles’ start and end location is different to the two locations we want to cache. We have chosen an arbitrary nearby third location for this (this location must be valid - it can’t be in the middle of the ocean for example).
Each dummy vehicle has its corresponding dummy job dispatched to it (in the dispatches array). As a result, to reconstruct the state of each vehicle the optimiser has to calculate travel from the vehicle’s base to the location we want to cache and then back again to the base. Therefore the forward and back data for the target locations are calculated and cached.
As all stops are dispatched, the optimiser will not spend any time doing optimisation work on this model; it will only check that the travel data is available.
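The dummy model can be generated mechanically from a list of coordinates to cache. The sketch below mirrors the example JSON above; the configuration.distances block is deliberately omitted because it is deployment-specific (road network graph paths etc.), so you would merge in your own distances configuration before PUTting the model.

```python
def build_dummy_model(model_id, base_coord, cache_coords):
    """Build a dummy model that keeps cache_coords warm in the global
    distances cache: one dispatched dummy job per coordinate, each served
    by a dummy vehicle based at base_coord (an arbitrary valid location).
    Field names follow the example dummy model above."""
    jobs, vehicles = [], []
    for i, coord in enumerate(cache_coords, start=1):
        jid = f"Dummy{i}"
        jobs.append({"stops": [{"type": "DELIVER", "coordinate": coord,
                                "_id": jid}],
                     "_id": jid})
        vehicles.append({
            "definition": {
                "start": {"type": "START_AT_DEPOT", "coordinate": base_coord,
                          "openTime": "2019-01-01T00:00",
                          "_id": f"{jid}Start"},
                "end": {"type": "RETURN_TO_DEPOT", "coordinate": base_coord,
                        "closeTime": "2020-01-01T00:00",
                        "_id": f"{jid}End"},
            },
            "dispatches": [{"stopId": jid}],  # dispatched, never optimised
            "_id": jid,
        })
    return {
        "data": {"jobs": jobs, "vehicles": vehicles},
        "configuration": {
            # merge your own "distances" block in here
            "timeOverride": {"override": "2019-01-01T00:00",
                             "overrideType": "SCHEDULER"},
        },
        "_id": model_id,
    }
```
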
12.5 Taxi auctions walkthrough
ODL Live can be configured to support taxi auctions, where a new job is sent to multiple taxi drivers at once (e.g. the 10 ‘closest’ drivers) and awarded to the first driver who accepts it. The taxi drivers are assumed to be independent contractors who can accept / reject jobs as they wish.
The number of live taxis you can model at once is strongly dependent on (a) how you use ODL Live and (b) the amount of memory available on your ODL Live server instance. Although you could use ODL Live to model ride sharing with taxi auctions - where you use skills to lock down accepted jobs to drivers but don’t dispatch them - in the current version of ODL Live this would only scale to a couple of hundred live drivers and jobs at once. Without ride sharing the current version can scale to thousands of drivers. Please contact Open Door Logistics if you would like to use ODL Live for taxi ride sharing on larger problems - the upgrade to support this is relatively minor and is in our development roadmap.
12.5.1 Basic operation
For the following scenario we assume no ride sharing and therefore jobs are only held in the ODL model once they have been dispatched - i.e. no undispatched jobs should be held in the ODL model. The scenario works as follows:
Create the model
Repeat the following:
Keep taxi GPS coordinates updated, updating in batches.
Use a planning query to get the best X drivers for a new job. Note this doesn’t add the job to the model - the job only exists temporarily in the query.
Offer the job to the X drivers selected.
Once a driver accepts, use an HTTP PATCH request to add the new job to the model and add the dispatch objects to the vehicle record at the same time.
New jobs are therefore not added to the model until they are dispatched. We examine these steps in detail in the following sections.
12.5.1.1 Creating the taxi auctions model
This is just a standard ODL Live model, with the same configuration as your other models and your normal defaults for lateness penalties.
12.5.1.2 Updating GPS coords
In the taxi auctions scenario - especially when you have thousands of vehicles - the primary role of ODL Live is to support very fast queries using road-network based travel times to determine (a) when drivers currently on jobs will be free next and (b) what time they can arrive at a new job’s pickup location. The accuracy of these road network travel times is dependent on the amount of time you’ve spent configuring them - if you use speed regions and rush hour modelling they will be much more accurate.
ODL Live uses sophisticated algorithmic techniques to perform these fast road network queries. Whenever ODL Live receives new GPS coordinates or a new job location, it must do some processing to maintain the internal state needed for fast query processing. For example, if you update GPS trace locations for 500 different vehicles, ODL Live might need a second of processing time before it can serve any new planning queries. If you have 5000 vehicles and send new GPS coordinates for all 5000 at once, ODL Live might be locked up for 10 seconds before it can process any queries for new jobs.
You should therefore (a) limit how often GPS coordinates are updated for vehicles and (b) batch updates - for example, sending new GPS coordinates for 10% of vehicles at once. As a starting point, we would recommend a maximum of 500 new GPS coordinates at a time, sent every couple of seconds. If you encounter optimiser performance issues then ease back on the update rate. Writes to a single model are queued on the server side, so there is no benefit in sending another update query before the last one has finished.
To update the GPS coordinates you are advised to use the PATCH endpoint - see the PATCH section for more details.
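The batching advice can be sketched as follows. The exact per-vehicle field layout of a GPS update in the PATCH body is an assumption here (check the PATCH section for the real schema); the point of the sketch is the chunking.

```python
def gps_update_batches(updates, batch_size=500):
    """Split vehicle GPS updates into PATCH bodies of at most batch_size
    vehicles each, so a single huge update cannot lock up the optimiser.
    `updates` maps vehicle id -> coordinate dict. The per-vehicle field
    layout below is illustrative only - see the PATCH section for the
    actual vehicle GPS schema."""
    items = [{"_id": vid, "coordinate": coord}
             for vid, coord in sorted(updates.items())]
    return [{"vehicles": items[i:i + batch_size]}
            for i in range(0, len(items), batch_size)]
```

Each returned body would then be sent as a separate PATCH, waiting for one to finish before sending the next.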
12.5.1.3 Planning query for a new job
The following JSON defines a planning query for a new job.
{
"jobs" : [ {
"stops" : [ {
"coordinate" : {
"latitude" : 40.71202973339141,
"longitude" : -73.39475562617531
},
"lateTime" : "2017-01-01T01:01:23",
"type" : "SHIPMENT_PICKUP",
"durationMillis" : 0,
"_id" : "P22"
}, {
"coordinate" : {
"latitude" : 40.76215422473169,
"longitude" : -73.8956092881875
},
"lateTime" : "2017-01-01T01:01:23",
"type" : "SHIPMENT_DELIVERY",
"durationMillis" : 0,
"_id" : "D22"
} ],
"quantities" : [ 1 ],
"_id" : "J22",
} ],
"queryType" : "ANALYSE_NEW_JOB_ASSIGNMENT",
"resultsLimit" : 50,
}
In contrast to the planning queries detailed in the planning queries walkthrough chapter, this query has a type ANALYSE_NEW_JOB_ASSIGNMENT. This type tells ODL Live to try adding the job to each vehicle and get the change in optimiser cost. This uses the same cost model as the rest of ODL Live. You could therefore set up the query in a number of different ways:
To order results by earliest arrival at the pickup location first, set the pickup late time to the current time. As lateness cost should be set to dominate all other costs, a late time of the current time should result in ‘earliest arrival first’ ordering.
To order results by least driving time (i.e. physically closest once any current dispatches are finished), set a much later late time.
The ids of the job and its stops must not already exist in the model, or an error will occur. The resultsLimit field dictates how many vehicles should be included in the results - e.g. top 10, top 50, etc.
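The two orderings above can be captured in a small query-builder sketch. The helper name and its 6-hour "far future" offset are illustrative assumptions; the lateTime trick itself is as described above.

```python
from datetime import datetime, timedelta

def analyse_new_job_query(job, results_limit=50, order="earliest_arrival",
                          now=None):
    """Build an ANALYSE_NEW_JOB_ASSIGNMENT planning query.
    order="earliest_arrival": lateTime = now, so lateness cost dominates
    and results come back earliest-arrival-first.
    order="closest": lateTime is pushed far into the future (6 hours here,
    an arbitrary illustrative choice) so least driving time dominates."""
    now = now or datetime.utcnow()
    late = now if order == "earliest_arrival" else now + timedelta(hours=6)
    stamped = dict(job)
    stamped["stops"] = [dict(stop, lateTime=late.strftime("%Y-%m-%dT%H:%M:%S"))
                        for stop in job["stops"]]
    return {"jobs": [stamped],
            "queryType": "ANALYSE_NEW_JOB_ASSIGNMENT",
            "resultsLimit": results_limit}
```
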
Planning queries can be sent either synchronously or asynchronously. If you send synchronously, you get the query result from the HTTP request’s response. If you send asynchronously, you post the query and then poll later for the result; the planning queries walkthrough chapter details this. If queries are taking a long time to process, asynchronous processing may be beneficial so you don’t have to worry about HTTP request timeouts. Here we describe synchronous queries - the simpler type - and we recommend them as a first approach. You should POST the JSON for your query to:
POST my-base-URL/models/my-model-id/queries/synchronous/
You must have the forward slash on the end of your URL or you will get an error.
For one or two thousand live taxis, set up as described here, the query should take a second or two to return a response. Queries are processed more-or-less one-at-a-time within a single ODL Live model, so if a query takes one second to process and you send two at exactly the same moment, one will end up waiting two seconds to complete. Similarly, if you send queries quicker than the system can process them, a queue of queries will develop on the server and responses will take longer. You should monitor for this situation; if it occurs you may need to upgrade the hardware or split your model into multiple models to enable more parallel processing.
The response JSON object has the following form:
{
"query" : {
... original planning query JSON
},
"costSortedResults" : [
{
"selected" : {
... json of the best selected vehicle, including its id...,
},
"plan" : {
... optimiser plan (selected route only), with arrival times
}
},
{
"selected" : {
... json of the second best selected vehicle, including its id...,
},
"plan" : {
... optimiser plan (selected route only)
}
}
],
"textDescription" : "Useful debug string summarising the assignments",
}
The costSortedResults array contains the vehicles, up to the requested limit, ordered by least cost first. The selected object in each element of this array contains the original vehicle record, including its id. Within your own system, you should then send the job to the drivers corresponding to the vehicle records in the costSortedResults array.
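Extracting the drivers to contact is then a small helper over costSortedResults. This sketch assumes, as in the vehicle records elsewhere in this chapter, that the vehicle’s id lives in its _id field.

```python
def top_driver_ids(query_result, limit=10):
    """Pull the vehicle ids out of costSortedResults (least cost first),
    ready to offer the new job to the corresponding drivers."""
    results = query_result.get("costSortedResults", [])
    return [r["selected"]["_id"] for r in results[:limit]]
```
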
12.5.1.4 Patch to add new job and update vehicle dispatches
When a driver has accepted the job, you should tell the ODL Live system, so it can update the driver’s state. You should send a PATCH with the new job and the updated vehicle object. This will have the form:
{
"jobs" : [
{
... JSON of new job including its id ...
}
],
"vehicles" : [
{
... JSON of updated vehicle ....
}
]
}
where the vehicle object should have two extra stops in its dispatches array corresponding to the pickup and delivery stops. Dispatch objects are described in the chapter on real-time planning.
It is important that you send this update with the job and vehicle together. The vehicle object references the job object (via its dispatches) and sending the two separately (e.g. in two different HTTP requests) could result in a hanging reference or the system being in an invalid state.
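Assembling the combined update can be sketched as follows. The {"stopId": ...} dispatch shape matches the dummy-model example earlier in this chapter; the helper itself is illustrative.

```python
def build_dispatch_patch(job, vehicle, pickup_stop_id, delivery_stop_id):
    """Build a single PATCH body that adds the accepted job and appends
    the pickup and delivery dispatches to the winning vehicle, so the
    model is never left with a hanging reference between the two."""
    vehicle = dict(vehicle)  # shallow copy; don't mutate the caller's record
    dispatches = list(vehicle.get("dispatches", []))
    dispatches += [{"stopId": pickup_stop_id}, {"stopId": delivery_stop_id}]
    vehicle["dispatches"] = dispatches
    return {"jobs": [job], "vehicles": [vehicle]}
```
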