WorkManager for Background Work in Libraries

Making it easy on applications by cleanly sharing the Singleton

WorkManager is an Android Jetpack library for deferrable, reliable background work. In other words, the work does not need to run immediately, but it must run reliably (even after the app process restarts), and it usually runs off the main thread. WorkManager is great because it can combine Android system knowledge, such as the battery level and the current type of network access, with awareness of other background work being done to optimize the timing and minimize the resource usage of your work within the constraints you set.

Using WorkManager in your application is simple, as the Getting Started guide suggests:

  1. Define the work: a Worker.
  2. Create a WorkRequest with particular Constraints for when it is run, and a set of input Data.
  3. Submit the WorkRequest to WorkManager.
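As a sketch of those three steps (the Worker name and log-file details here are illustrative, not from our library):

```kotlin
import android.content.Context
import androidx.work.*

// 1. Define the work: a Worker whose doWork() runs on a background thread.
class UploadLogsWorker(
  appContext: Context,
  parameters: WorkerParameters,
) : Worker(appContext, parameters) {
  override fun doWork(): Result {
    val path = inputData.getString("logFilePath") ?: return Result.failure()
    // ... upload the file at `path` to the server ...
    return Result.success()
  }
}

// 2. Create a WorkRequest with Constraints and input Data.
val request = OneTimeWorkRequestBuilder<UploadLogsWorker>()
  .setConstraints(
    Constraints.Builder()
      .setRequiredNetworkType(NetworkType.CONNECTED)
      .build()
  )
  .setInputData(workDataOf("logFilePath" to "/data/logs/pending.log"))
  .build()

// 3. Submit the WorkRequest to WorkManager.
fun enqueue(context: Context) {
  WorkManager.getInstance(context).enqueue(request)
}
```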

It gets a bit more complicated when your Workers have custom dependencies that need to be injected in some way beyond the primitives passed in by the input Data. Thankfully there are many helpful guides for how to use a WorkerFactory to construct a Worker with other dependencies, as well as to properly initialize the singleton WorkManager at application startup so WorkManager knows where and how to access that WorkerFactory!
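A sketch of that setup, assuming a hypothetical LogUploader dependency and factory names of our own choosing (not the real library code):

```kotlin
import android.app.Application
import android.content.Context
import androidx.work.*

// Hypothetical non-primitive dependency the Worker needs.
class LogUploader {
  fun uploadPending(): Boolean = true
}

// A Worker with a custom dependency beyond the (Context, WorkerParameters) pair.
class DependentUploadWorker(
  appContext: Context,
  parameters: WorkerParameters,
  private val uploader: LogUploader,
) : Worker(appContext, parameters) {
  override fun doWork(): Result =
    if (uploader.uploadPending()) Result.success() else Result.retry()
}

// The WorkerFactory that knows how to inject that dependency.
class DependentWorkerFactory(private val uploader: LogUploader) : WorkerFactory() {
  override fun createWorker(
    appContext: Context,
    workerClassName: String,
    workerParameters: WorkerParameters,
  ): ListenableWorker? =
    if (workerClassName == DependentUploadWorker::class.qualifiedName) {
      DependentUploadWorker(appContext, workerParameters, uploader)
    } else {
      null // Let another factory (or the default one) handle it.
    }
}

// Custom initialization at application startup, after removing the default
// initializer from the manifest so WorkManager is not initialized twice.
class MyApplication : Application() {
  override fun onCreate() {
    super.onCreate()
    WorkManager.initialize(
      this,
      Configuration.Builder()
        .setWorkerFactory(DependentWorkerFactory(LogUploader()))
        .build()
    )
  }
}
```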

The real problem shows up when you want to use the WorkManager singleton from a library: viz. there can only be one WorkerFactory provided to the Configuration at the time of custom initialization. This can be a DelegatingWorkerFactory which delegates to other factories, but all factories still need to register with the single delegator. At Square we had this exact problem for our logging client library.

The logging client is used by all of our mobile applications to send analytics and debug logs to our servers. The library code for the client handles on-disk persistence of the logs as well as, you guessed it, _deferrable background work_ to upload them.

Back to the problem: a single DelegatingWorkerFactory (or your own implementation of one) can delegate to any number of other factories, but that doesn’t solve the root problem for our library: the WorkerFactorys the library uses would still need to be provided to the application’s DelegatingWorkerFactory before that factory is handed to the WorkManager initialization.

That kind of complex timing handshake as a requirement for using our library is highly undesirable. The problem is worse if the library is open source or if it is an SDK, such as the SDK for our Square readers. In that case, asking third-party application developers to include this initialization handshake is unacceptable. The timing could be stretched out with kotlin.Lazy or dagger.Lazy usage, but this does not reduce the initialization complexity placed on application developers.

Another approach for an application, detailed in this blog post, would be to use ‘on-demand’ initialization by having the Application class implement Configuration.Provider and provide the configuration on demand. Then, for example, using Dagger or Dagger + Hilt (in the case of a HiltWorkerFactory), one could guarantee the right dependency construction order by virtue of the Dagger graph, as the @Provides for the Configuration would inject the DelegatingWorkerFactory, which in turn would inject all the dependencies required by the individual WorkerFactorys.
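A sketch of that on-demand approach, hand-wired rather than Dagger-injected for brevity (in newer WorkManager releases Configuration.Provider exposes a val property, workManagerConfiguration, instead of the getter shown here):

```kotlin
import android.app.Application
import androidx.work.Configuration
import androidx.work.DelegatingWorkerFactory

// On-demand initialization: WorkManager asks for the Configuration lazily,
// the first time it is actually used, so the factory chain can be assembled
// in the right order beforehand.
class OnDemandApplication : Application(), Configuration.Provider {

  // With Dagger these factories would be injected; hand-wired here for brevity.
  private val delegatingWorkerFactory: DelegatingWorkerFactory by lazy {
    DelegatingWorkerFactory().apply {
      // addFactory(someLibraryWorkerFactory) // one call per library factory
    }
  }

  override fun getWorkManagerConfiguration(): Configuration =
    Configuration.Builder()
      .setWorkerFactory(delegatingWorkerFactory)
      .build()
}
```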

Where does that solution leave us? Our library would require any client application to:

  1. Explicitly depend on the WorkManager library.
  2. Have their Application class implement Configuration.Provider.
  3. Ensure the initialization timing is correct or use Dagger (or Dagger + Hilt).

Not a single one of these requirements, let alone all three together, is an acceptable cost for using our logging library or our reader SDK. We want to support applications that already use WorkManager as well as those that do not.

Unsatisfied with the available options, we reached out to Google’s WorkManager team. In that discussion we came to better understand the heart of the problem: we wanted our library’s Workers both a) to be constructible without any work on the application’s part and b) to have custom dependencies injected. Further, we wanted use of our library to implicitly initialize WorkManager when the application did not use it, but not to duplicate the initialization (causing a runtime exception) when it did.

The pattern the support thread offered as the solution takes advantage of a superpower of WorkManager: if the custom WorkerFactory returns null for a particular Worker signature, or if there is no custom WorkerFactory at all, the default factory is invoked, and it can construct a Worker from its class name (via reflection) as long as that Worker has no custom dependencies. The WorkManager team pointed us toward the ConstraintTrackingWorker, which is used within WorkManager itself and is part of its open source code. This Worker has no custom dependencies, so it can be constructed via reflection by the default factory, but it delegates its substantive work to another Worker.
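A simplified sketch of what the default factory effectively does when no custom factory claims a Worker (the real implementation lives inside WorkManager; this is why such a Worker must have only the two-argument constructor and must be kept from obfuscation, e.g. with @Keep):

```kotlin
import android.content.Context
import androidx.work.ListenableWorker
import androidx.work.WorkerParameters

// Reflectively invoke the (Context, WorkerParameters) constructor by class
// name. This fails for any Worker with extra constructor dependencies, which
// is exactly why those need a custom WorkerFactory.
fun reflectiveCreate(
  appContext: Context,
  workerClassName: String,
  parameters: WorkerParameters,
): ListenableWorker {
  val workerClass =
    Class.forName(workerClassName).asSubclass(ListenableWorker::class.java)
  return workerClass
    .getDeclaredConstructor(Context::class.java, WorkerParameters::class.java)
    .newInstance(appContext, parameters)
}
```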

What does this mean? We could use such a Worker, which we call a RouterWorker, as an entry point to the library’s use of WorkManager without requiring anything of the application. Within library code, a WorkRequest for any type of Worker the library needs can be routed through the RouterWorker by adding the class name of the delegate Worker to the input Data of the request, along with an identifying ID if there are multiple such Workers. The request looks like the following:

val delegatedUploadWorkData = Data.Builder()
  .putString(WORKER_CLASS_NAME, ActualUploadWorker::class.qualifiedName)
  .putString(WORKER_ID, workerId)
  .build()

workManager.enqueueUniqueWork(
  "$workerId-DelegatedWork",
  KEEP,
  OneTimeWorkRequestBuilder<RouterWorker>()
    .setConstraints(
      Constraints.Builder()
        .setRequiredNetworkType(CONNECTED)
        .build()
    )
    .setInputData(delegatedUploadWorkData)
    .build()
)

The request is made for the RouterWorker, so regardless of the application’s configuration this Worker can be constructed. The RouterWorker itself looks like this:

/**
 * A worker to route requests from within the library from a single location.
 */
@Keep
class RouterWorker(
  appContext: Context,
  parameters: WorkerParameters,
) : ListenableWorker(appContext, parameters) {

  private val workerClassName =
    parameters.inputData.getString(WORKER_CLASS_NAME) ?: ""
  private val workerId = parameters.inputData.getString(WORKER_ID)
  // See below for the definition of this map of factories.
  private val delegateWorkerFactory = workerFactories[workerId]
  private val delegateWorker =
    delegateWorkerFactory?.createWorker(appContext, workerClassName, parameters)

  /**
   * Whether or not this should crash if there is no appropriate [delegateWorker] is based on
   * whether or not the "work" needs to be guaranteed. In the case of our library, this work
   * will either be picked up by a different background job (from persistent storage) or it is
   * no longer needed.
   */
  override fun startWork(): ListenableFuture<Result> {
    return if (delegateWorker != null) {
      delegateWorker.startWork()
    } else {
      // This would be the place to crash if this work were mission critical.

      val errorMessage = "No delegateWorker available for $workerId" +
        " with workerClassName of $workerClassName. Is the " +
        "RouterWorker.workerFactories populated correctly?"

      Log.w("RouterWorker", errorMessage)

      val errorData = Data.Builder().putString("Reason", errorMessage).build()

      // A library helper future that completes immediately with Result.failure(errorData).
      NoDelegateRouterFailedWorkFuture(errorData)
    }
  }

  override fun onStopped() {
    super.onStopped()
    delegateWorker?.onStopped()
  }
}

After construction, the RouterWorker looks up the appropriate WorkerFactory given the input Data and uses it to construct the delegate Worker. After that it is simply a matter of forwarding the request along. Since startWork() returns a ListenableFuture<Result>, at this point we can only wait for the Result. More plumbing would be necessary to monitor the progress of the delegate Worker if it were long-running. Exercise for the reader: is this possible?

There are multiple methods that can be used for constructing the delegate Worker via the correct custom WorkerFactory. In our case a singleton map of factories is used (singletons beget more singletons, but alas).

companion object {
  // We hold a shared-state Singleton of WorkerFactories here so that we can keep the Worker
  // construction logic all within our library. Modified only on the main thread.
  val workerFactories = object : AbstractMutableMap<String, UploadWorkerFactory>() {

    private val backingWorkerMap = mutableMapOf<String, UploadWorkerFactory>()

    override fun put(key: String, value: UploadWorkerFactory): UploadWorkerFactory? {
      confineToMainThread()
      return backingWorkerMap.put(key, value)
    }

    override val entries: MutableSet<MutableEntry<String, UploadWorkerFactory>>
      get() = backingWorkerMap.entries
  }
}
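With the map in place, the library’s own initialization code, never the application’s, registers a factory before enqueueing work that routes to it. A sketch, assuming a hypothetical LogUploader dependency and that UploadWorkerFactory takes it as a constructor parameter:

```kotlin
// Library-internal setup (illustrative): called on the main thread during
// library initialization, before any RouterWorker requests are enqueued.
fun registerUploadFactory(workerId: String, uploader: LogUploader) {
  RouterWorker.workerFactories[workerId] = UploadWorkerFactory(uploader)
}
```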

Whatever method is chosen, this has the effect of keeping all of the library’s WorkManager code within the library. Ultimately it is this pattern that has enabled us to move forward with the use of WorkManager in our libraries at Square. Hopefully it helps your library development as well!
