Purpose of license templatization
Peter Williams <peter.williams@...>
During the discussion this morning regarding license templatization a
question came up regarding the exact purpose of templatization. This
question was not answered satisfactorily, so hopefully the full legal
group can answer it.
The use cases we have so far can be categorized as either ignoring
inconsequential variations (e.g., white space differences, alternate
spellings, minor grammatical differences) or ignoring very common, and
well understood, material variations (e.g., changes in the name of the
organization).
Support for specifying acceptable material changes seems necessary.
Without it, several of the standardized licenses will be effectively
useless because they have organization names, etc., embedded in them.
The BSD license is a prime example.
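To make the BSD case concrete, here is a minimal sketch of what matching
against a template with an acceptable material variation could look like.
The clause text and the placeholder name ("org") are illustrative
assumptions, not anything the spec has defined:

```python
import re

# Hypothetical template: a BSD-style clause in which the organization
# name is a placeholder, i.e. an acceptable material variation.
TEMPLATE = (
    r"Neither the name of (?P<org>.+?) nor the names of its "
    r"contributors may be used to endorse or promote products"
)

def match_clause(text):
    """Return the organization name if the clause matches the template,
    otherwise None. Whitespace is collapsed before matching."""
    m = re.search(TEMPLATE, " ".join(text.split()))
    return m.group("org") if m else None

clause = (
    "Neither the name of   the University of California nor the\n"
    "names of its contributors may be used to endorse or promote products"
)
print(match_clause(clause))  # prints "the University of California"
```

The point is only that a template needs a way to mark *where* material
substitutions are permitted; any real mechanism would be defined by the
spec, not by a regular expression like this.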
Standardizing approaches for ignoring inconsequential variations has
much lower value. It will be extremely difficult to do well and tools
can handle this problem without a standard. In fact, most tools
already have sophisticated techniques for recognizing licenses while
ignoring trivial variations. Those techniques are probably superior
to the rather basic normalization mechanisms we are going to be able
to specify. Tools are unlikely to adopt any approach suggested in the
spec because that would reduce the quality of their results.
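For comparison, the "rather basic normalization mechanisms" mentioned above
would amount to something like the following sketch (lowercasing, dropping
punctuation, collapsing whitespace); this is an assumed illustration, not a
proposal:

```python
import re

def normalize(text):
    """Rudimentary normalization: lowercase, strip punctuation, and
    collapse whitespace, so trivially different copies compare equal."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)  # drop punctuation
    return " ".join(text.split())        # collapse runs of whitespace

a = ("Redistribution and use in source and binary forms,\n"
     "with or without modification, are permitted.")
b = ("Redistribution   and use in source and binary forms, with\n"
     "or without modification are permitted")
print(normalize(a) == normalize(b))  # prints True
```

Existing tools already go well beyond this (token-level diffing, fuzzy
matching, and so on), which is precisely why standardizing such a scheme
adds little.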
Designing, testing and documenting even a relatively simple-minded
English language normalization algorithm is non-trivial. (If we need
to support any other languages that will, of course, add to the level
of effort.) Much of the effort required to design and implement such
a normalization scheme will fall on people who are already critical
resources for the beta release of the spec.
We should seriously consider if a license normalization algorithm is
worth the cost. (Particularly with an eye to the opportunity costs.)
I don't think specifying how tools/people should deal with
inconsequential variations in license text is worth the effort. Tools
will quickly evolve, or more likely already have evolved,
techniques equivalent or superior to anything we will specify.
If it does turn out that a standardized normalization mechanism is
required, it would be just as easy to implement post-beta or in
version 2 as it is to implement now.