VirtualBox Image for Evergreen Redux

Long, long ago at a Hack-A-Way far, far away ... well, actually it was just Michigan last month ... Yamil Suarez and I talked about the challenge of getting folks to work on things like documentation and bugs when setting up Evergreen can be both very challenging and time consuming for them.  That stuck in my mind, and then I was reviewing some QA work done by Jason at ESI using a script originally developed by Bill Erickson, also at ESI.  I immediately liked the idea of building a Debian Wheezy image with the host changes, git, XULRunner 14 and so on preinstalled, so that with the git scripts you could build a new server with any configuration changes coming from the scripts.  I went this far because I'm pretty sure this will be useful to me at least.  So now I want to share it with the community and see if others find it useful for the same purposes I imagine.  If so, I will look at doing tutorials for VirtualBox, this process and eventually git to help others.

With this image, twelve lines of terminal commands and responses to eleven prompts (things like 'yes', 'yes', 'evergreen', 'evergreen' and hitting enter seven times), you can get a fully functioning, testable install of Evergreen.  [  Oh, and you might have to click on a couple of GUI elements, like opening the terminal.  :)  ]

Here are the terminal commands:

./grab_script.sh      # respond to a few prompts in the script
su opensrf
osrf_control --start-all --localhost
exit
/etc/init.d/apache2 start
su postgres
./grab_data.sh
exit
su opensrf
cd /openils/bin
./autogen.sh
exit

 

Hack-A-Way 2013 Day 2


I should have said this at the outset of yesterday's post: Hack-A-Way 2013 is hosted by Calvin College and sponsored by Equinox Software.  I have no obligation to mention them in this forum, but they both deserve the recognition (and far more). 

Priority one for day two was finding out how to hack Hangouts so that my typing didn't mute the microphone (which they couldn't hear anyway, since I was using an external microphone).  Some quick googling uncovered that this is a common complaint from people who use Hangouts for collaboration, and that there is an undocumented tweak that only requires minimal terminal comfort.  I'm still tempted to get a second laptop to make it easier to position the camera, though, and I'm definitely bringing the full tripod next time.  But, AV geekery behind me ...

We started with reports on the work the day before.  

Ben Shum reported on the mobile catalog work.  That group was the largest of the working groups and had laid the groundwork by agreeing that full TPAC functionality should be the goal.  The team worked on separate pieces and on moving files into a collaborative branch in the working repository.  A lot of the work is CSS copied from work done by Indiana, as well as de-tabling interfaces and using DIVs.  

Our table worked on a proof of concept for a web based staff client.  Bill Erickson had previously done a Dojo-based patron search interface and uncataloged item checkout as a proof of concept.  We worked on fleshing that out, discussing platforms for responsive design and what would be needed for baseline functionality (patron search, checkout, items out, renewals) and, later, bills.  This is less a demo at this point than a proof of concept, but one goal is to have something that might, in a very limited way and with some caveats, also help those suffering from staff client memory leaks by handling checkouts without the staff client.  It is also bringing up a lot of conceptual questions about the architecture of such a project.  A working directory and dev server are up.  Most of the work on this is being done by Bill and Jeff Godin with input from the rest of us.  

Lebbeous Fogle-Weekly reported for the serials group.  They targeted some specific issues, including how to handle special one-off issues of an ongoing series, and discussed the future direction of serials work.  In fact, they have already pushed some of their work to master.  However, because of their narrower focus, they are going to break up. 

Jason Stephenson worked on the new MARC export and has a working directory up.  The new script is more configurable.  At this point I unfortunately missed some of the conversation due to some issues back home I had to deal with, but apparently, in a nod to Dan Scott, MARC will now be MARQUE.  

In evaluating the 2.5 release process we spent a lot of time discussing the mostly good process and the big challenges the release manager faced.  The community goal has been making more stable releases.  During this release Dan Wells added more structure, which was good; the milestones and the flagging of bugs helped.  He also wanted feedback, which was really hard to give since the developers were very happy with his work.  But there are challenges, and solutions are elusive right now.  Kathy Lussier addressed DIG concerns about documentation: ESI does a lot of the documentation work for new features, but work not done by them is often left undone.  We had 380 commits since 2.4, with the biggest committers being Dan Wells, Ben Shum and Mike Rylander.  Is that sustainable?  A rough guess is that those are half bugs and half features, which is an improvement over the past.  Do we need to loosen review requirements?  Do we do leader boards as a psychological incentive?  There was concern that some would lower standards to increase their numbers.  After a discussion that ran longer than we had planned, the decision about selecting a 2.6 release manager was put off to let folks think about these issues more.

Discussion also wandered into QA and automated testing.  A lot of progress has been made here since the conference.  In regards to unit testing, there was a consensus that while it's a great idea, it won't have a significant impact for a while.  Right now the tests are so minimal that they don't reflect what real data does in complex real world environments, and it will take time spent finding those issues and writing more tests before the work has its payoff.

Art.  Kinda looks like a grey alien to me.


I won't try to re-capture all of the conversation, but maintaining quality and moving releases forward were discussed in great depth.  There was less interest in discussing 2.6 than in really trying to clean up and make sure 2.5 is solid.  The decision about who would be the 2.6 release manager was put off, and the idea of a leader board to encourage bug squashing was proposed.  A bug "whackin" day targeting bugs, like Koha does, was also floated.

I spent a lot of the day looking at some great instructions Yamil Suarez put together for installing OpenSRF and Evergreen on Debian for potential new users, and chatting with Jeff and Lebbeous about the need to beef up the concerto data set with new serials and UPC records.  Other projects included looking at the web site and starting conversations about users, merchandising, IRC quotes, and so on.  

By the evening we had a nice dinner and a group of us headed out to Founders for a drink and to walk about downtown Grand Rapids in order to look at Art Prize installations which were quite nice.

 

Evergreen Hack-A-Way 2013 Day 1

Note: this is not a comprehensive report, just my notes from memory.

I'm writing this as I eat a waffle at breakfast on day 2.  Day 0 was Monday, when folks gathered at the conference center for dinner, but it was Tuesday that things really started.  Starting at breakfast everyone was immediately in work mode.  Talk was heavily on the future of the staff client and the other big issues that we've all been waiting to hash out in person.  We wrapped up grub and headed as a group to the Calvin College library, which is kindly hosting us in its conference facilities.  Power was at every table and coffee soon appeared.  The wifi wasn't perfect, but we may have been pushing its limits.  And those really are the three critical needs of this crowd: power, wifi and caffeine.  I found myself in a bit of an AV geek role, hosting the Google Hangout and coordinating things with IRC a little (and I certainly wasn't the only one multitasking back and forth, so remote folks were involved).  

As we gathered (after laptops were set up), discussion immediately centered on the future of the staff client.  We discussed the issues with XULRunner.  Dan Scott noted that he had talked to Mozilla folks at a Google event and they were surprised at our use of XULRunner, noting that this wasn't its purpose.  Certainly newer versions cut off critical functionality, and memory leaks are an ongoing concern.  With all of this in mind, everyone was firmly in favor of moving forward somewhere.  

Ben, Kathy, Bill in the lobby Tuesday night.


As we discussed where to go from XULRunner and what to go to, the question became web based client, yea or nay.  Although there were participants with a preference for a local staff client (specifically Java based), the web based arguments took the day, and those who preferred a local client were willing to support a web based one.  Discussion centered on using modern Dojo, and Chris Sharp spent the afternoon seeing how it would work with Evergreen 2.4.  Everyone was concerned about the practical issues: how, with the community's limited resources, we could implement a web based staff client in stages and get testing and engagement.  The consensus was that we need to live with the staff client for a while but move away from XUL within it, draft standards for the new staff client, and, if we can, move to modern Dojo, so that staff client interfaces might be largely portable to a web based one.  We also discussed how to handle things that can't be done in a browser, whether a small local app or a plugin is the better answer, and best practices around offline mode, staff workstation registration and printing, along with various Windows OS concerns, with authentication being maybe the trickiest.  Offline with modern HTML5 was one of the lesser concerns.  Many words were also committed to how many and which browsers should be supported, and although no absolutely final answer was given, most folks seemed to agree on the community supporting Chrome and Firefox, with individual Evergreen members free to support others.  

Collaborative notes were done by several parties and remote participation was good, both of which I was happy about.

After discussion we broke into groups looking at MARC export, web based staff client proof of concept, serials and mobile OPAC.  In between talking about the staff client I worked on some merchandising and web site issues for Evergreen (as well as handling some SCLENDS and York issues as they popped up).  

We worked until we were getting fairly punchy, then broke to freshen up and head to dinner.  I ended up at a great Thai place with good spicy food.  After that we did what I called the Hack-A-Way Lobby version with eight of us, until I ran out of steam at 11:30.  Today is a new day.

Sound and Fury

Well, I got home from a road trip to find my comp copies of the July/August Computers in Libraries waiting for me, and some emails!  I sat down to re-read it because frankly I wrote it long enough ago that I don't remember much of what I wrote.  

http://www.infotoday.com/cilmag/jul13/index.shtml

The article is about open source, including Evergreen, and selecting an ILS.  A few big things:

1) They gave it a nice attractive spread.  That's vanity on my part but I like it. 

Front spread of the article.  


2) I'm still happy with my opening paragraph.  "Few decisions cause a library director to fret more than choosing a new integrated library system (ILS).  No matter what you acquire, a new ILS is expensive in terms of money, staff, time and stress.  Additionally, the wrong choice can damage morale and have lasting consequences.  Sometimes it is easy to identify which ILS is wrong for you - the contract costs are too high or maybe the features that you need aren't present.  But, too often, selecting the right one is like going to a car dealership where everyone speaks in tongues and the price lists are encrypted."

3) They re-used an old bio bit for me from my days working at the State Library which is wrong.  I'm at the York County Library System now.

Now, for the email I got and my response:  

From Greg, full name withheld to protect the guilty :)  :

"I just received my copy of the publication "Computers In Libraries", July/August 2013. I thought your article "Sound and Fury" was an excellent guide for libraries considering a migration of their library systems, but I was a bit surprised that you cited "LibLime Koha and Evergreen" as examples of open source ILSs. I rather suspect that many open source people would regard LibLime Koha as open source only by the letter of the law, and not by spirit or community. Evergreen is indeed an excellent example of open source software, but I wonder if it suffers by its apparent close association in this context with LibLime Koha.
Koha (!= LibLime Koha) is a much more openly developed and community supported example of an open source application than the LibLime fork. Your article deals very well with the subject of selecting vendors; the paid-support page for Koha (
http://koha-community.org/support/paid-support
) lists 37 vendors world-wide (if my quick count is correct and deducting two entries for PTFS). I'm under the impression that only PTFS supports LibLime Koha, but perhaps there are others. Many of the listed Koha service providers provide hosted application (ASP) solutions as you mentioned in your article. 
A quick count of my Koha mailing list messages for July 24-31 shows 86 entries (sorry, I got tired of counting after going backwards for one week), that probably extrapolates to about 350 messages per month. I don't follow the free support for LibLime, but I've been told that it's more questions than meaningful answers. Link of possible interest: https://listserv.nd.edu/cgi-bin/wa?A2=ind1308&L=web4lib&D=0&P=15401
Code contributions to the Koha development process are encouraged, with contributions and downloads available on a git code management system, and packages are available for Debian-based operating systems. Koha also has an IRC channel where developers discuss issues, and where users can <mostly> ask questions and get answers to problems they are experiencing. I'm not aware that LibLime Koha is as openly developed or freely supported. 
Again, I thought your article was excellent, but have misgivings about your citation of LibLime Koha instead of Koha as an example of open source software."

My ill thought out but honest response:

"Hi Greg,
I appreciate the feedback.  Looking back at the article, I'm a bit chagrined about that.  I admit I'm an outsider in the Koha community, though I have a fondness for any open source library project.
Just last week I got a chance to chat at length with a gentleman [name and association redacted to protect those who didn't give permission to be used].  He actually reached out to me because of an upcoming talk I'm doing.  I was aware of some community conflict with LibLime, but he gave me a lot of context on the Koha vs. KOHA issues.  Suffice it to say that if I had known, I would have mentioned Koha differently.  Technically what I said is correct, but it obviously doesn't address the serious community concerns there, and looking at community is central to the issue I wanted to discuss.  
Maybe on some level it's best not to have written about that there.  It really is an issue that deserves discussion in more depth.  I've thrown out the idea to the editors of CiL of doing an all open source issue (it's been about four years since they've done one).  If that happens I would love to work with someone to write about the Koha community issues in more depth.  Still, whether it was the place for it or not, I think I would have written that bit a wee bit differently.  I'm always glad to get opportunities to trigger discussion (even if the price I pay is putting my foot in my mouth occasionally).   "

 

Looking back and getting to read my article again, that bit doesn't really detract from it.  It's just a quick reference at the beginning, but I do regret it, and I feel I should write something about communities in open source projects as a follow up.  That gets me thinking about which projects beyond Koha and Evergreen, failed and successful, to look at.

Inventories With Evergreen

Recently I've gotten a lot of questions about doing inventories in Evergreen, because I've proposed, and am looking for funding partners for, a full-fledged inventory component for Evergreen.  I've heard folks complain about this missing piece for the past five years, both inside and outside my consortium.  

Inventories themselves can be complicated or not but follow a fairly simple recipe:

1) Find out what you have.

2) Compare it to what you should have. 

3) Correct the exceptions. 

Within that you can have a lot of diversity.  What are your controls on what you have?  Libraries have items moving around all the time, so there are a number of variables to control for.  And what level of correction do you perform?  This can range from the simple to the very, very complicated.   Additionally, are you doing this in item buckets or via SQL?  That determines a lot about the scale at which you can perform operations.  What is your project management like, and are there functions you want more segregated?  All of these are things I think baked-in functionality will help with.  

Still, folks have managed to do their own inventories.   

Indiana has a great detailed writeup of their process here:   

http://www.in.gov/library/files/How_to_do_an_Inventory_Using_Evergreen.pdf

There was a presentation also done at the 2012 conference but the site's gone now and I can't remember who it was:  http://evergreen-ils.org/dokuwiki/doku.php?id=conference:2012

However, I get a lot of questions to the effect of "that's all nice and good and the development would be nice, but I need to do an inventory now and we need something simple."  So, here is the barebones process I set up for an SCLENDS member library, and how I helped a Windows based admin run it via SQL.  The admin can do it without SQL using smaller batches in buckets, but several of the steps are more difficult that way.  

Mind you, where this can get most complicated is in correcting for human error, the kind of thing that, once it's built into Evergreen, the computer can do more of for us.  At this point we have to manually tell tools to do these checks each time, rather than having a programmer tell the computer in advance to do so in a stored manner.  

This process isn't perfect but can be done on large or small inventories and asynchronously by multiple groups. 

Step 1) Set everything in the set of items to be inventoried to trace if its current status is checked in.  

Step 2) Set up carts with scanners and laptops.  Ideally assign two people to each cart: one moves materials while one scans.  Scan into the item status screen using the barcode.  Make sure that all item screens are set up the same way to show barcodes.  Turn up the volume on the error blurting noise for a mis-scanned / not barcoded item.  Pull items that aren't in the catalog.  When done scanning, save the file out to a delimited text file with information indicating branch and shelving location.  All you really need to display is the barcode.  If you're willing to pass on finding items that shouldn't even be there during the initial pass, you don't even need an Evergreen client.  With some libraries having spotty wifi, being able to just scan into Notepad was really nice.  It also made things more reliable; I found that the Evergreen staff client would occasionally crash, causing people to lose work.

If you do have a very high volume of errors in your collection, you will want to do smaller chunks and not have groups work asynchronously.  If this is the case, you may want to display status and correct as you go, especially checking in checked out and lost items.  I don't recommend correcting checked out items by batch, as there are a lot of potential variables that impact customer service, unless you're doing something like blanket wiping out associated charges in the patrons' favor.  You may also want to show shelving location and branch to resolve misplaced items.  An item's call number will usually show you where something is out of place, but not always, especially if it's at the wrong branch or is material from another library.   

Step 3) Now, combine the text files.  The same thing can be done in buckets, but I found relying on buckets too slow for large updates.  If you can do it via SQL, having the data in text files is convenient.  However, to avoid losing work to staff client crashes, this meant a lot of small text files.  I would sort them by shelving location into different folders and then combine them.  These commands work on Mac and Linux, and on Windows under Cygwin, I believe.  

ls > filestocat.txt  

grep -v -e filestocat.txt -e barcodes.txt filestocat.txt | xargs cat > barcodes.txt

Essentially you're making a list of everything in the directory, filtering out the list file and the output file so they don't end up inside the data, and then combining the remaining files into one big data file. 

In a perfect world this step is done.  However, in the real world folks invariably won't follow some part of step 2 correctly (another reason for baked-in functionality).  So the list will probably need to be brought into Excel (or another tool for working with delimited data) to have mismatched columns corrected.  
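If you want to catch those mismatches before reaching for Excel, a quick command line check works too.  This is just a sketch: it assumes tab-delimited scan files with the barcode in column one and a shelving location in column two (your files may differ), and the sample lines below are made-up stand-ins for a real barcodes.txt.

```shell
# Throwaway sample standing in for the combined barcodes.txt from step 3;
# the second line has an extra column, the kind of mismatch step 2 can produce.
printf '31234000000001\tStacks\n31234000000002\tStacks\tOops\n31234000000003\tStacks\n' > barcodes.txt

# Print the line number and content of every row that does not have
# exactly two tab-delimited fields, so they can be fixed by hand.
awk -F'\t' 'NF != 2 { print NR ": " $0 }' barcodes.txt > mismatched.txt
cat mismatched.txt
```

Anything that lands in mismatched.txt gets corrected by hand (or in Excel) before moving on to step 4.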

Step 4) Do reports and corrections.  This is the point at which you can get fancy or keep it simple.  Reports should be used to find things out of position.  You can do this manually or even have staff go through a list and find them.  If there are a huge number out of order, you may be better off just doing a shelf reading.  At a minimum, run an update statement to move anything in the list currently marked as trace to checked in.  You want to check for the current status of trace in case the item was checked out in the interim.  You may want to run a list of the trace items for staff to look for.  You may want to do updates to correct branch and shelving location in case those are wrong.  You may want to batch delete everything still listed as trace.  Whether you want to do a second inventory pass will depend on how many exceptions you found.  
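For that minimum update, here is a sketch of how the statement could be built from the combined barcode file.  The status IDs are assumptions: stock Evergreen has no "trace" status, so your site will have added one to config.copy_status with its own ID (the 101 below is made up), and 0 is the stock "Available" status; check your own config.copy_status table before running anything.

```shell
# Throwaway sample standing in for the barcodes.txt built in step 3.
printf '31234000000001\n31234000000002\n' > barcodes.txt

# Wrap each barcode in quotes and join them into a SQL IN list.
in_list=$(sed "s/.*/'&'/" barcodes.txt | paste -sd, -)

# Only rows still in the (hypothetical) trace status 101 are touched,
# so anything checked out in the interim is left alone.
cat > update_trace.sql <<EOF
UPDATE asset.copy
   SET status = 0
 WHERE status = 101
   AND barcode IN ($in_list);
EOF
cat update_trace.sql
```

Review update_trace.sql, then feed it to psql inside a transaction so you can roll back if the affected row count looks wrong.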

Step 5) Rest and bask in the relaxed feeling of a perfectly shelved collection.  This usually lasts minutes rather than hours.  :)

----------------------------------- 

None of these steps are sacrosanct.  Each organization will probably adjust them; the needs of your organization will determine much of that.  But all of this involves a lot of repeatable tasks that computers can do better but that we currently have to manage manually.  Instead of just adjusting a few preferences or org unit settings, right now we have to significantly adjust the workflow and documentation for each change, and trust humans to be accurate and precise each time instead of letting the computer do the work for us.

 

 

The "other" Open Source ILS

So ... apparently I managed to upset some people.  I recently wrote an email about the registration for the Evergreen 2013 Hack-A-Way.  You can see the whole thing here: 

http://evergreen-ils.org/blog/?p=1299

Kathy Lussier kindly posted it for me but I wrote it.  The part that apparently upset some people was this:  

 supporting that “other” open source ILS, Koha.

Unfortunately, there are those I interact with who don't know much of Koha and think of it as the "other" open source ILS.  This is unfortunate in my mind, and when I encounter it I try to correct it.  I'm sure there are those in the Koha community who are likewise befuddled when Evergreen is mentioned.

For my own part, I've been involved in open source for about twenty years.  I heard about Koha before I had heard of Evergreen.  I'm glad we have two big, healthy communities.  They have shared code.  I have pointed out, both publicly and privately, ways that the Evergreen community could learn from Koha, as recently as today.  I have projects I hope occur in the future that I see as ways to benefit both communities and both codebases.  I'm not an active member of the Koha community myself, but I think of them as siblings, or at least cousins.  In my geographic neck of the woods I'm often called on for ILS recommendations, and I often recommend Koha where I think it's a better fit than Evergreen.  Between libraries and open source, we can't be that distant on the family tree after all.  

I put quotes around the word "other" because I wanted to draw attention to it.  The email was posted for the Evergreen community, and to many there Koha is the other open source ILS.  I dislike that because I see it as dismissive.  Perhaps this is an indirect way of saying that, by drawing attention to a common perception, I hoped people would question it.    

I may have meant a little tongue in cheek humor, and perhaps it was unwise.  Humor doesn't always come across well, much less come across as flattering rather than dismissive.  I apologize for that.  It may seem an odd place for this, in an announcement about a hackers' getaway, but there are not many opportunities to expose the Evergreen community to awareness of Koha, and frankly, I think we would benefit from that.

 

 

 

Mobile Registration Client

In a nutshell: a mobile registration app for Evergreen (initially imagined for Android, though an iOS version would be cool).  The feature list includes a password screen to protect the app and the ability to register and edit accounts on the fly.  To be clear, this is not a project that SCLENDS or York County has decided to take up, but I was asked to imagine what such an application could do.  I'm trying to balance the original inspiration with robust functionality, but I have to admit, once you go this far it's tempting to add features like checkin or checkout, though that might also be overreaching.  I'm posting this purely in case anyone is interested.  Ultimately this is a pretty simple app, and it has a lot of overlap with web based solutions that others have already done.

Use cases that drove these decisions: I manage circulation and outreach departments.  With circulation we have frequent issues with lines backing up, because patron registration takes a long time compared to normal checkouts.  This is made worse by things like summer reading program spikes in activity.  We could send people elsewhere to register, but they still need a lot of assistance, and the layout of our building is ... well, designed for a different age of the world.  So there, mobility is better but not crucial.  However, mobility is all-important for outreach.  My outreach staff go out to events like festivals and walk lines, and the ability to deliver services like registering patrons with a table and a mobile hotspot is extremely attractive.  Finally, I can buy a decent Android tablet, even with a pen and security devices, for substantially less than a desktop and USB tablet.

Requirement: We have to be able to store signatures.  This is a holdup for many library boards, and in fact it would be nice in Evergreen itself (on demand).  

Proposal: To do this we add a column to the actor.usr table called signature to store the actual signature.  Alternatively, we could work around it and use the photograph field, but that would interfere with anyone who uses that field for its intended function.  We might save the binary data of the image there, or the array for a bitmap, or a mathematical value of the signature curve.  (This part of the idea was inspired by a lightning talk at the conference which nicely paralleled a lot of my own thinking.)
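As a rough sketch of what that schema change might look like in Evergreen's PostgreSQL backend (the column name and type here are my assumptions, not a committed design):

```sql
-- Hypothetical: store the captured signature image as raw bytes.
-- A site storing a point array or curve description might use TEXT instead.
ALTER TABLE actor.usr ADD COLUMN signature BYTEA;
```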

So, let’s break this down by workflow and screen:

When I run the app, the screen I am greeted with depends on whether this is the first time I have run it.  If it is, the first screen I see upon initial load is the password screen.  The purpose of the password is to prevent patrons from accessing the settings screen while they are using the app and potentially messing up any settings.  Thereafter, when the app is loaded it will go to the logon screen.

First time you load the app:

Settings Password Screen.png

Logon Screen (the default you see after the first time):

[image not loading]

Settings Screen:

Settings Screen.png

After logging in, a registration screen is loaded.  The settings can be accessed from there and from other screens.  There are actually two registration screens: a simple self-registration screen, where staff still have to intervene to finalize the record, and a more full featured one meant for staff mobility. 

Self-Registration Screen:

Self Registration Screen.png

The self-registration screen is pretty straightforward, but you will notice that it lacks a few critical things and makes some assumptions.  Required fields should be marked by some kind of symbol or background fill color, based on what is required in the database.  Note that I've tried to make this reflect core Evergreen conventions rather than be SCLENDS specific, but I have moved away from the staff client registration screen in a few places where I think it makes more sense for this purpose.

When the record goes into actor.usr from here, it will leave the password blank or make it the last four digits of their PIN if you've set that option.  

Juvenile is calculated based on birthdate.

Parent/Guardian should only be set by staff so it’s missing.

Internet Access level should only be set by staff so it’s missing.

Profile Permission group should only be set by staff so it’s missing but it will default based on settings.

Privilege Expiration date should only be set by staff so it’s missing but it will be default based on settings.

Barred and collections exempt default to null, not set.

Active defaults to FALSE, so that they have to have staff activate their account.

Claimed return, claimed never checked out, alert message are all null on self registration.

Hold behind the circ desk defaults to null.

Valid address and statistical categories are for staff interface only.

The planned workflow: the patron hits submit at the end of entering their information, and a new screen comes up presenting the actor.usr id, entered name, phone number and email address.  Hitting the submit button inserts a new actor.usr record that is automatically marked invalid and has no barcode.  The password is left blank.  The staff member uses the name, id, phone number or email address to access the patron record that has now been inserted into the database.  Based on library procedure, they might go directly to the record via the user id, or they might search by name or email to check for inclusion in a group, or for merging if it is a duplicate.  Since it is a self-registration, some options are restricted from the app user.  The user name will be left blank until staff add the barcode.  At that same time the password can be added, along with a corrected patron profile group.

Then the staff member marks the account valid and adds a barcode, giving the patron their barcode.  Using the options screen, there is an option to set the machine to staff mode, with a barcode scanner to immediately attach a barcode and override some default behaviors of the self-registration.

The staff registration screen is quite different and gives direct access to everything.  It is also used when editing a record:

Staff Registration/Editing Screen: 

Staff Registration Screen.png

---------------------------------------

Android Registration Patron Editor App.png

Evergreen Easter Egg

It's a little early for Easter, I know, but if you pick up the December Computers in Libraries there is a little Evergreen Easter egg.  

I took a few photos for my article on technology trends, and in the one showing an iPad as a second screen extending my laptop's display and input, I'm running the Evergreen staff client directly from the iPad.