Recently I’ve gotten a lot of questions about doing inventories in Evergreen because I’ve proposed, and am looking for funding partners for, a full-fledged inventory component for Evergreen. I’ve heard folks both inside and outside my consortium complain about this missing feature for the past five years.
Inventories themselves can be simple or complicated, but they all follow the same basic recipe:
1) Find out what you have.
2) Compare it to what you should have.
3) Correct the exceptions.
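Assuming each pass produces a plain text file of barcodes (the file names and barcodes below are made up for illustration), the "compare" step of that recipe can be sketched with standard Unix tools:

```shell
# Toy data: the catalog says 30001-30003 should be on the shelf;
# the scan pass actually found 30002-30004.
printf '30001\n30002\n30003\n' > shelflist.txt
printf '30002\n30003\n30004\n' > scanned.txt

# comm requires sorted input.
sort -o shelflist.txt shelflist.txt
sort -o scanned.txt scanned.txt

comm -23 shelflist.txt scanned.txt    # missing: in catalog, not found (30001)
comm -13 shelflist.txt scanned.txt    # extra: found, not in catalog (30004)
```

The two `comm` lines are your exception lists: what should be there but isn’t, and what’s there but shouldn’t be.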
Within that you can have a lot of diversity. What are your controls on what you have? Libraries have items moving around all the time, so there are a number of variables to control for. And what level of correction do you perform? This can range from the simple to the very, very complicated. Additionally, are you doing this in item buckets or via SQL? That determines a lot about the scale at which you can perform operations. What is your project management like, and are there functions you want to keep more segregated? All of these are things I think baked-in functionality will help with.
Still, folks have managed to do their own inventories.
Indiana has a great detailed writeup of their process here:
There was also a presentation at the 2012 conference, but the site’s gone now and I can’t remember who gave it: http://evergreen-ils.org/dokuwiki/doku.php?id=conference:2012
However, I get a lot of questions to the effect of “that’s all nice and good, and the development would be nice, but I need to do an inventory now and we need something simple.” So, here is the barebones process I set up for an SCLENDS member library and how I helped a Windows-based admin run it via SQL. The admin could do it without SQL using smaller batches in buckets, but several of the steps become more difficult.
Mind you, where this gets most complicated is in correcting for human error. That’s exactly the kind of thing that, once it’s built into Evergreen, we can have the computer do more of for us; at this point we have to manually tell our tools to run these checks each time, rather than having a programmer tell the computer in advance to do so in a stored, repeatable way.
This process isn’t perfect but can be done on large or small inventories and asynchronously by multiple groups.
Step 1) Set everything in the set of items to be inventoried to Trace if its current status is Checked In.
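In SQL terms, step 1 might look something like the sketch below. This is not production code: status IDs live in config.copy_status and vary by site (“Trace” is a locally added status in many systems), and the IDs and org unit used here are hypothetical. Scope the WHERE clause to whatever set you’re actually inventorying.

```sql
-- Sketch only. Look up your own IDs in config.copy_status:
-- 0 is 'Available' (checked in) by default; 'Trace' is a local addition.
UPDATE asset.copy
   SET status = 15          -- hypothetical 'Trace' status id
 WHERE status = 0           -- only touch items currently checked in
   AND circ_lib = 123       -- hypothetical branch org unit id
   AND NOT deleted;         -- add shelving-location filters as needed
```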
Step 2) Set up carts with scanners and laptops. Ideally assign two people to each cart: one moves materials while the other scans. Scan into the Item Status screen using the barcode, and make sure all the item screens are set up the same way to show barcodes. Turn up the volume on the error noise for a misscanned or unbarcoded item. Pull items that aren’t in the catalog. When done scanning, save the results out to a delimited text file whose name indicates the branch and shelving location. All you really need to record is the barcode. If you’re willing to pass on finding items that shouldn’t even be there during the initial pass, you don’t even need an Evergreen client. With some libraries having spotty wifi, being able to just scan into Notepad was really nice, and it was more reliable: I found that the Evergreen staff client would occasionally crash, causing people to lose work.
If you do have a very high volume of errors in your collection, you will want to work in smaller chunks and not have groups working asynchronously. In that case you may want to display the status column and correct as you go, especially checking in checked-out and lost items. I don’t recommend correcting checked-out items by batch, as there are a lot of variables that affect customer service, unless you’re doing something like a blanket wipe of associated charges in the patrons’ favor. You may also want to show shelving location and branch so mismatches can be resolved on the spot. An item’s call number will usually show you when something is out of place, but not always, especially if it’s at the wrong branch or is material from another library.
Step 3) Now, combine the text files. The same thing can be done in buckets, but I found relying on buckets was too slow for large updates. If you can work via SQL, having the data in text files is convenient. However, to avoid losing work in the staff client, this meant a lot of small text files. I would sort them by shelving location into different folders and then combine them. These commands work on Mac and Linux, and on Windows with Cygwin, I believe.
ls *.txt > filestocat.list
xargs < filestocat.list cat > barcodes.txt

Essentially you’re making a list of every scan file in the directory and then using that list to combine the files into one big data file. (Give the list file an extension that doesn’t match your scan files, or the listing will pick it up and it will be swept into the combined output along with them.)
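Before loading the combined file anywhere, it’s worth normalizing it. Scan files saved on Windows machines usually have carriage returns, stray blank lines creep in, and double-scans leave duplicate barcodes. A small sketch with toy barcodes:

```shell
# Toy scan file: CRLF line endings, a blank line, and a double-scan of 30002.
printf '30001\r\n\r\n30002\r\n30002\r\n30003\r\n' > barcodes.txt

# Strip carriage returns and blank lines, then deduplicate.
tr -d '\r' < barcodes.txt | grep -v '^$' | sort -u > barcodes.clean.txt

# Barcodes scanned more than once, in case they need a second look.
tr -d '\r' < barcodes.txt | grep -v '^$' | sort | uniq -d    # prints 30002
```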
In a perfect world, this step is done. In the real world, though, folks will invariably miss some part of step 2 (another reason for baked-in functionality), so the list will probably need to be brought into Excel (or another tool for working with delimited data) to correct for mismatched columns.
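Some of that column checking can be scripted before anything goes near Excel. Assuming, hypothetically, that each line should be barcode, branch, and shelving location separated by tabs, awk can flag any row with the wrong number of columns:

```shell
# Toy file: the second line is missing its shelving location column.
printf '30001\tMAIN\tFIC\n30002\tMAIN\n' > scans.txt

# Print the line number and content of any row without exactly three fields.
awk -F'\t' 'NF != 3 {print NR ": " $0}' scans.txt    # flags line 2
```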
Step 4) Do reports and corrections. This is the point at which you can get fancy or keep it simple. Reports should be used to find things out of position. You can do this manually or even have staff go through a list and find them; if a huge number are out of order, you may be better off just doing a shelf reading. At a minimum, run an update statement to move anything in the list currently marked Trace back to Checked In. You want to check for a current status of Trace in case the item was checked out in the interim. You may want to run a list of the remaining Trace items for staff to look for, do updates to correct branch and shelving location where those are wrong, or batch delete everything still listed as Trace. Whether you do a second inventory pass will depend on how many exceptions you found.
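The minimum update in step 4 might be sketched in psql like this, again with hypothetical status IDs (check your own config.copy_status) and assuming barcodes.txt holds one clean barcode per line:

```sql
-- Load the scanned barcodes into a scratch table (\copy is a psql command).
CREATE TEMP TABLE inv_scan (barcode TEXT);
\copy inv_scan FROM 'barcodes.txt'

-- Anything we scanned that is still 'Trace' goes back to checked in.
UPDATE asset.copy ac
   SET status = 0                -- 'Available' / checked in
  FROM inv_scan s
 WHERE ac.barcode = s.barcode
   AND ac.status = 15;           -- hypothetical 'Trace' status id

-- Whatever is still 'Trace' was never scanned: the list for staff to hunt.
SELECT barcode FROM asset.copy WHERE status = 15 AND NOT deleted;
```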
Step 5) Rest and bask in the relaxed feeling of a perfectly shelved collection. This usually lasts minutes rather than hours. 🙂
None of these steps are sacrosanct; each organization will probably adjust them to fit its own needs. But this process involves a lot of repeatable tasks that computers could do better and that, right now, we have to manage manually. Instead of just adjusting a few preferences or org unit settings, we have to significantly rework the workflow and documentation for each change, and trust humans to be accurate and precise every time, instead of letting the computer do the work for us.