When working in the DaaS tenant portal, it can happen that an action goes wrong and you end up waiting for the activity to fail…
Unfortunately the activity cannot be cancelled; it will only time out after roughly 4 hours.
During my tests I was too impatient for that and went looking for a way to kill the task anyway.
My initial problem was that an RDSH would not join the domain, so the desktone.log was full of messages like:
2019-01-09 11:57:20,761 INFO [com.desktone.server.accessFabric.poolManager.PoolManagerMonitorJobBean]-[fabricScheduler_Worker-5] Processing task : ExpandSessionPoolTask ( PoolManagerTask ( com.desktone.dataModel.task.poolMgr.ExpandSessionPoolTask@d6267302 id = 201 poolId = 1018 taskId = null priority = 2 nextRetryTimestamp = null numFailures = 0 taskPercentage = 90 taskType = CloneSessionVM numberOfPowerOpsSuccess = 0 numberOfPowerOpsFailed = 0 totalPowerOpTasks = 0 timestamp = 2019-01-09 11:44:01.808 runningTask = RunningTask ( com.desktone.dataModel.task.poolMgr.RunningTask@995d92ef taskId = 3c69faf4-4f85-4af1-b43b-33951bbbecfb elementId = F28595CF0C hostId = 580eee05-8aea-4306-99c8-008d10e55e13 state = running stateDescription = Finished virtual machine 'Apps103'
customization. Request to join the domain - Initiated taskStarttimeMillis = 2019-01-09 11:44:25.864 ) vmOrPatternIds = [] isLocked = false taskParams = {desktopModelId=543f15a3-20b9-40a2-82eb-094ee34097df, requested=5, isNewSessionHost=true, computePoolId=580eee05-8aea-4306-99c8-008d10e55e13, sessionHostId=4e3683d2-0730-45bd-a81c-e6463d19e43e} parentId = null ) )
Step 1
From the message above I took the taskId and queried the edb database:
select * from t_task where id='3c69faf4-4f85-4af1-b43b-33951bbbecfb';
There was my 'running' task… I could tell it was the right one because the stateDescription matched the log message 🙂
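If you don't have a log line with the taskId at hand, you can also list everything the tenant appliance still considers running. This is only a sketch: the id and state columns are the ones used in the rest of this post, so double check the column names in your own edb database before relying on it.

select id, state from t_task where state='running';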
Step 2
I updated the state column:
update t_task set state='error' where id='3c69faf4-4f85-4af1-b43b-33951bbbecfb';
The result is that the specific task in the activity overview now shows the status Failed.
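If you want a safety net while editing the database, you can wrap the update in a transaction and only commit once the row looks right. This is just a sketch of that idea using the same taskId:

begin;
update t_task set state='error' where id='3c69faf4-4f85-4af1-b43b-33951bbbecfb';
select id, state from t_task where id='3c69faf4-4f85-4af1-b43b-33951bbbecfb';
commit;

If the select shows anything other than the single row you meant to change (for example because of a typo in the where clause), run rollback; instead of commit;.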
Unfortunately that was not enough: the VM had already been created in VMware and kept spamming the desktone.log with messages like:
2019-01-09 12:10:05,103 INFO [com.desktone.collector.swiftmq.JmsNotificationWorker]-[pool-27-thread-13] Sending bootstrap information to Daas agent on GM 35b4d745-4828-45e0-ac9f-17342c962135
2019-01-09 12:10:05,128 INFO [com.desktone.collector.swiftmq.JmsNotificationWorker]-[pool-27-thread-13] Successfully sent Daas Agent bootstrap credentials over JMS for GM 35b4d745-4828-45e0-ac9f-17342c962135
2019-01-09 12:10:06,068 ERROR [com.desktone.collector.swiftmq.JmsNotificationWorker]-[pool-27-thread-4] Exception caught whilst updating JMS sequence for gmid: 35b4d745-4828-45e0-ac9f-17342c962135 : com.desktone.core.dao.LockException: org.hibernate.PessimisticLockException: could not extract ResultSet
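The GM id in those messages points to a machine record that still exists in the edb database. Before deleting anything, you can verify that the id from the log really matches exactly one row in t_general_machine, the table we will clean up in the next step:

select * from t_general_machine where id='35b4d745-4828-45e0-ac9f-17342c962135';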
Step 3
The last step was to delete the VM from the t_general_machine table in the edb database.
delete from t_general_machine where id='35b4d745-4828-45e0-ac9f-17342c962135';
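If several session hosts got stuck in the same failed expansion, the clean-up can be done in one statement. This is only a sketch, the second id is a placeholder for another stuck VM, and the transaction lets you check the reported row count before committing:

begin;
delete from t_general_machine where id in ('35b4d745-4828-45e0-ac9f-17342c962135', '<gmid-of-other-stuck-vm>');
commit;

psql reports the number of deleted rows after the delete; if it doesn't match the number of stuck VMs, run rollback; instead of commit;.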
As I mentioned before, the downside of this approach is that the VM has already been created in VMware and will not be deleted automatically, so you need to delete the VM(s) manually.
If you wait for the activity to time out instead, the VM(s) are deleted for you as well.