This paper describes the development of a scalable process in which people and machines work together to identify sections of text that reflect specific human values. A total of 2,005 sentences from 28 prepared testimonies presented at hearings on Net neutrality were manually annotated for one or more of ten human values, using an annotation frame informed by prior experience annotating similar content with the Schwartz Values Inventory. Moderately good inter-annotator agreement suggests that human annotators can often draw meaningful distinctions. Several k-Nearest-Neighbor classifiers were compared in this preliminary study, yielding promising results that point clearly to productive directions for future work.
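As a rough illustration of the classification setup the abstract describes, the sketch below trains a k-Nearest-Neighbor classifier over TF-IDF sentence features to assign a value label to a sentence. This is a minimal sketch, not the authors' pipeline: the training sentences, the value labels, and the choice of k and distance metric are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' pipeline): a kNN sentence classifier
# over TF-IDF features, one plausible baseline for labeling sentences
# with human values. The toy sentences and value labels below are
# illustrative assumptions, not drawn from the paper's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Toy training sentences with hypothetical value labels.
train_sentences = [
    "Open networks let every innovator compete on equal terms.",
    "Consumers deserve the freedom to choose their own services.",
    "Regulation protects the public from discriminatory practices.",
    "Investment in infrastructure rewards companies that take risks.",
]
train_labels = ["fairness", "freedom", "security", "achievement"]

# TF-IDF features with cosine-distance kNN (k=1 given the tiny sample).
clf = make_pipeline(
    TfidfVectorizer(),
    KNeighborsClassifier(n_neighbors=1, metric="cosine"),
)
clf.fit(train_sentences, train_labels)

# Classify a new sentence by its nearest training neighbor.
pred = clf.predict(["Everyone should compete on equal terms."])[0]
print(pred)
```

A real study along these lines would vary k, the feature representation, and the distance metric, and would evaluate against the human annotations rather than toy data.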