Before proceeding to the interview itself, I would like to tell the story of how I met Gerry Purdy. It is worth telling.
As regular readers of my blog know, I am a fan of combined events and their scoring. I have worked on and off on scoring since my childhood (one of the first articles in the blog tells that story), I have developed scoring tables for finswimming, and in 2007 I published an article in New Studies in Athletics on the physical basis of scoring. While compiling the bibliography for that article I became aware of the work of Purdy. He had presented a PhD thesis on scoring in 1972. An abridged version of the thesis was published in the form of three articles which appeared between 1974 and 1977 in the journal Medicine and Science in Sports.
At the beginning of 2021 I decided to publish a series of articles on the theories of scoring. I was familiar with the approach of D. Harder (which was the basis of my article in NSA) and I had an idea of the evolution of scoring in athletics, mainly from Zarnowski's writings. However, I felt that if I wished to do justice to a century's efforts to produce adequate scoring tables, I had to delve into the older writings. I remembered the writings of Purdy and tried to find the articles quoted in his bibliography. Curiously, many of those were available, but for some I drew a blank. However, I am not easily discouraged. Purdy had defended his thesis in 1972, so we had to be of the same age, give or take a few years, and most probably he was still alive and kicking. I searched for his email address, found it and wrote to him. Gerry wrote back the next day and the contact was established.
Concerning the old references, this is what he wrote in his email:

"As for the older copies of reference articles, I do not have any of them. But, there is (slight) hope: When I finished my Ph.D. at Stanford, I put a copy of the thesis along with a box of reference articles into the Computer Science Department Library. That box may still exist, so if you are interested in continuing your detective work, you can contact them and see if it still exists and, if so, get a copy of the articles in the box mailed to you."
And here starts the next interesting part of my quest. I went to the web page of the Terman Engineering Library of Stanford University and decided to contact the librarian, Ms. Linnea Shieh. (I do not know why I chose Ms. Shieh. It was probably because she had the most welcoming smile.)
I wrote to her and again I had an immediate response. She was able to locate the item Gerry was referring to, had it repatriated to the campus from the archive storage, and had the papers I was looking for digitised. Moreover, she also scanned Dr. Purdy's thesis, and thus this document (of which only two or three hard copies existed) is now safe. (I am greatly indebted to Ms. Shieh: without her precious help I would never have been able to complete the "scoring theories" series.) Once the contact with Gerry Purdy was established we started to have more technical discussions and, one thing leading to another, we decided that by combining our expertise there was something to be done in the domain of scoring. The result is an article that has now been completed and submitted for publication.
The "Theories of Scoring" series in the blog was put on hold in June 2021 in order to make way for the series "The long and arduous road of women to the Olympics". But as I was pointing out in that last post, a second season was in preparation. Articles will appear in the following weeks (months?) but I decided that, given the capital role played by Gerry Purdy, it would be interesting to launch this second season by an interview. I suggested this in an email of mine (or was it in a zoom session?), Gerry accepted, I sent him a list of questions and here you have the interview.
BG. What attracted you to T&F?
GP. This is an interesting question. When I was in high school (Northside High, Atlanta), I was active in sports. I played Little League baseball and JV basketball. I also enjoyed running cross country. But my PE teacher (Coach Arthur Armstrong) had everyone participate in a decathlon. He used it to identify promising athletes for his Track & Field team. I played Center Field in Little League and could throw the baseball farther than most others, and definitely more accurately. I threw the javelin farther than anyone else in the decathlon, so Coach Armstrong told me to stop running and start lifting weights. I placed fourth in the javelin at the State Championship in the 11th grade (as a Junior) and then won the Georgia State Championship as a Senior (1961). I also set the Atlanta City and State record at 184’ 10” (56.34 m). See below.
An interesting byproduct of all this was that I wondered how someone won the decathlon, and I got to look at the scoring tables. That was more a matter of awareness than of research or study.

BG. Have you been an athlete yourself?
GP. I really answered this above but, in addition, I did road-running races after college, including 10Ks, half marathons and full marathons. My best marathon time was 3:23:00.
Gerry (right) at the Western Hemisphere Marathon - LA 1966
BG. Why did you decide to work on scoring tables?

GP. While I was running with a friend (Jim Gardner), we were trying to figure out the appropriate pacing for a training run and looked at the decathlon tables to get point ratings up to the 1500 m. I happened to plot the points vs. performance and noticed, to my shock, that the field events were regressive (sloped over) instead of progressive (sloped up). (BG: a small sketch after this answer makes the distinction concrete.)
I was so shocked that I called the head of the IAAF (John Holt) in London and explained the problem to him. I told him they all had to be progressive. He then said, “Oh really? Why don’t you fix it?” And as they say, the rest is history.
Once I had finished my work I gave a presentation to the IAAF Technical Committee. The reaction of the committee members was very favorable. Unfortunately, the functioning of the IAAF Technical Committee is of a highly political nature and, since many countries bid to be the author of the next scoring table, it took 12 years for the committee to approve a new one. Mr. Holt later confirmed to me that my work was instrumental in creating the new table, which would be progressive throughout and based on my Ph.D. thesis at Stanford.
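(BG: To make the regressive/progressive distinction concrete, here is a minimal sketch. The table values are made up for illustration, not taken from any actual IAAF table: in a progressive table the marginal points per extra metre grow as the performance improves; in a regressive one they shrink, which is what Gerry found.)

```python
# A minimal check of whether a field-event table is progressive:
# compute the marginal points per extra metre between consecutive entries.
# The (distance, points) pairs below are invented for illustration only.
table = [(50.0, 700), (55.0, 790), (60.0, 870), (65.0, 940)]  # javelin, metres

for (d1, p1), (d2, p2) in zip(table, table[1:]):
    print(f"{d1:.0f}-{d2:.0f} m: {(p2 - p1) / (d2 - d1):.1f} points/m")

# Output: 18.0, 16.0, 14.0 points/m -- the increments shrink, so this
# made-up table is regressive; a progressive table would show them growing.
```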
BG. How about RunningTrax?
GP. I tried a number of times to make a business out of my research but it was much harder than I thought. At first, I tried to build a Web app, but mobile was growing so fast that we stopped that and got a mobile developer to do a joint venture. We got the RunningTrax iOS app built, but didn’t have any funds to promote it so I tried to license the software to others who already had apps, but there was too much NIH (“Not Invented Here”) from others. And, then, I tried to license the app to the various road races since they generated a lot of money (25,000 runners x $30 entry fee is almost a million dollars). They all wanted the app but wanted me to build a custom version for just them (with their branding) plus they wanted me to go find a sponsor that would pay me plus them!
I eventually had to back away as I had to make a living and my hopes and dreams for making millions off of RunningTrax faded away (🙁).
BG. And the current HPM project?
GP. Back when I finished my thesis and Ph.D. at Stanford, I knew there was a LOT more work to be done. First, I had a 'back of the envelope' math formula from someone in the Stanford Operations Research Dept. (I didn't know one formula from another and I thought it was likely 'good enough'). Second, I had only a small amount of data, on the order of a few hundred data points, behind each of the three anchor performances needed to solve the non-linear equation with three unknowns. The high end was easy, but I had to have a realistic mid-level set of performances, so I just arbitrarily assigned the average Masters performances to be the 500-point level. It was all a hodgepodge of stuff thrown together with some least-squares curve-fitting software on the Stanford 360/65 mainframe. Please don't share this with the folks at Stanford. They might withdraw my Ph.D.! (😊)
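(BG: For the curious, here is a minimal modern sketch of the kind of fit Gerry describes: a three-parameter curve pinned down by three anchor levels and solved with least-squares software. The functional form and all the numbers are my own illustrative assumptions, not his actual thesis formula.)

```python
# Sketch: fitting a three-parameter points curve P(v) = a * (v - b)**c
# through three (velocity, points) anchors by least squares.
# Model and anchor values are illustrative assumptions only.
from scipy.optimize import least_squares

anchors = [(6.0, 100.0), (8.0, 400.0), (10.0, 900.0)]  # hypothetical m/s -> points

def residuals(params):
    a, b, c = params
    return [a * (v - b) ** c - p for v, p in anchors]

# Bounds keep b below the slowest anchor so (v - b)**c stays well defined.
fit = least_squares(residuals, x0=[50.0, 4.5, 1.5],
                    bounds=([1e-6, 0.0, 0.5], [1e4, 5.9, 5.0]))
a, b, c = fit.x
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")  # -> a=25.00, b=4.00, c=2.00

def points(v):
    return a * (v - b) ** c

# c > 1 means the fitted curve is progressive (convex in velocity).
print(f"{points(9.0):.1f}")  # points for an intermediate performance: 625.0
```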
I believe we are going about this the right way now. We're using percentiles for points along the curve (as Harder did) and good math (primarily to be developed by you) to bring the two together. I suspect we will NOT have to use least-squares software. I am not positive, but I *think* that if we have many points along the percentile rankings (e.g. going in 0.01% increments), we will effectively have a differential formula that provides a calculation of the points from performances and vice versa (just as Newton did when he created calculus). Generating points from performances is always easy. It's going the other way (points to performances) that has been more challenging.
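(BG: A minimal sketch of that idea, with a made-up curve: if the percentile rankings are sampled densely enough, points-from-performance becomes a simple interpolation, and performance-from-points is the same lookup with the axes swapped, since the curve is strictly monotone.)

```python
# Sketch: a densely sampled monotone performance -> points curve can be
# interpolated in both directions. The curve itself is a stand-in, not HPM's.
import numpy as np

velocities = np.linspace(5.0, 10.0, 501)           # performance axis, m/s
points = 1000.0 * ((velocities - 4.0) / 6.0) ** 2  # assumed monotone curve

def points_from_velocity(v):
    return np.interp(v, velocities, points)

def velocity_from_points(p):
    # Valid because points increase strictly with velocity: swap the axes.
    return np.interp(p, points, velocities)

print(f"{points_from_velocity(8.0):.3f}")    # ~444.444 points
print(f"{velocity_from_points(500.0):.3f}")  # ~8.243 m/s
```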
So I created the Human Performance Modeling (HPM) project with Professor Jill McNill of the Biomechanics Department at USC. The HPM project allows us to solve the points-performance relationship for running, as well as for a number of other events in which a similar relationship exists. And then the real benefit to the world is to provide guidance on how to train to perform well (a 'personal best') while minimizing the risk of injury. Our APIs and data should then be able to be picked up and utilized everywhere.
Here's one good example: road races will be able to take our performance-to-points formulas and display the point score (likely to 0.001 precision) for every runner in a race. With the point score, clubs will be able to run virtual races comparing some who run a 5K with others who run a 10K, etc. With Big Data analysis, we can determine handicaps of one race against a 'flat, good weather' standard (much like the way golf courses are rated with a slope compared to par).
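(BG: A toy illustration of the virtual-race idea. The 1000-point standard times and the exponent below are my own stand-ins, not HPM's formulas; the point is only that once every performance maps to points, 5K and 10K runners land on one common scale.)

```python
# Sketch: ranking a cross-distance "virtual race" on a common points scale.
# Standard times and the exponent are illustrative stand-ins only.
STANDARD_1000 = {5_000: 13 * 60, 10_000: 27 * 60}  # hypothetical 1000-point times (s)

def toy_points(metres, seconds):
    # Faster than the standard scores above 1000; a power law keeps it progressive.
    return 1000.0 * (STANDARD_1000[metres] / seconds) ** 3

entries = [("Ann", 5_000, 19 * 60 + 30),   # 5 km in 19:30
           ("Ben", 10_000, 41 * 60)]       # 10 km in 41:00

for name, metres, seconds in sorted(entries,
                                    key=lambda e: -toy_points(e[1], e[2])):
    print(f"{name}: {toy_points(metres, seconds):.3f} points")  # .001 precision
```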
Here's another: with the 100% velocity for all running events, we'll be able to (first) estimate the training pace based on my past research, at 87.5% to 92.5% of maximal velocity. You need it fast enough to achieve the training effect but not so fast as to cause injury. With the HPM models and a large reference data set, we'll be able to determine a much better estimate of the % of max velocity at which to train. But wait, there's more! We can do much better when we utilize large amounts of data from runners' training (collected most commonly from devices like the Apple Watch).
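(BG: The first step of that estimate is simple arithmetic. Here is a sketch using Gerry's 87.5%-92.5% band, taking a recent race velocity as a stand-in for the 100% reference; the marathon time is just an example.)

```python
# Sketch: training-pace band at 87.5%-92.5% of the reference (race) velocity,
# per the research range Gerry mentions. The race performance is an example.
race_metres = 42_195                        # marathon
race_seconds = 3 * 3600 + 30 * 60           # an example 3:30:00 marathon
race_velocity = race_metres / race_seconds  # m/s, stand-in for 100% velocity

for fraction in (0.875, 0.925):
    sec_per_km = 1000 / (fraction * race_velocity)
    print(f"{fraction:.1%} -> {sec_per_km // 60:.0f}:{sec_per_km % 60:04.1f} min/km")
```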
Later, we can collect training data and outcome data for tens of thousands, if not millions, of runners. Professor Rand Wilcox of USC can help us sort through the data to refine the min-to-max % velocity that will achieve the best result without injury. We may find that the min-max % velocity differs as you go up the performance point levels. We may find that those who train by running perform best if they alternate running days with cross-training (or perhaps not), or we may learn how much rest is necessary to prevent injury.
How would we get the training data? Simple: provide a training-capture API that would run on the user's smartphone and/or smartwatch. The user gives HPM permission to use their training data without any identifying information.
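(BG: A sketch of what one anonymized capture record might look like. Every field name here is hypothetical; this is not an actual HPM API.)

```python
# Sketch: a hypothetical anonymized training-capture record, illustrating
# permissioned, identity-free data collection. All field names are invented.
import json
import uuid

workout = {
    "anonymous_id": str(uuid.uuid4()),  # random token, no identity attached
    "consent": True,                    # user granted HPM permission
    "date": "2022-03-14",               # example date
    "distance_m": 12_000,
    "duration_s": 3_600,
    "avg_heart_rate": 148,              # if the device provides it
    "device_class": "smartwatch",       # device type only, no serial number
}
print(json.dumps(workout, indent=2))
```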
And one final example: we can use the training data together with race data to correlate starting capability, workout data along the way, and outcome data (performance and point level), so as to give data-driven estimates of what is a realistic goal for each runner, all personalized to their level of ability. No more saying "I want to break 3 hours in the marathon" when 4 hours is the realistic goal. Goals will simply 'fall out' of the Big Data and eliminate guesswork. That alone will be a gigantic move forward, as it will help runners not overestimate their goals and, as a result, not end up injured, as many do now because of overtraining.
I am going to assume that Jill, Harper and Christian (BG: members of the HPM working group) will come up with a number of other examples. Thus the HPM project could have a profound impact on the entire world of training and racing, not only for running but also for cycling and swimming, and perhaps other sports as well.
Thanks for asking me these interesting questions!