John Hollinger wrote about the concept of "the mock draft guy" in each draft. He defined it as a guy that no respected NBA scout has ranked highly, but whose name keeps coming up because he was ranked high in an early mock draft and the mock draft community is full of groupthink. He said you would be surprised by how many owners and other front office types who are not serious evaluators get influenced by the mock drafts, resulting in blown draft choices. His candidate for that title the year he wrote this was Johnny Davis from Wisconsin.
Of course, my favorite team, the Wizards, drafted him and he was a huge bust. Someone is going to take Ace Bailey too high. Hopefully, not the Wizards.
Another thought, from people being like "But he dropped 37 points!" on Reddit... Has anyone ever looked into the predictive power of game-to-game variability? I could see it going both ways... On the one hand, I'd prefer to draft a guy who has some amazing games and some duds and make a bet that he can develop some consistency. That seems like a safer bet than hoping that another prospect can raise his entire ceiling. OTOH, great games cause a stir, and that can lead to certain guys being overrated.
This is a good point. I believe Stephen Shea proposed this idea in one of his basketball analytics books. I would also guess that having the ability to have high highs without consistency bodes better than being consistently mediocre.
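To make the variability question concrete, here's a minimal sketch of how you might turn game logs into features a projection model could test. The player labels and per-game point totals are invented for illustration; the idea is just that two prospects with the same scoring average can look very different once you add a spread measure and a "ceiling game" proxy.

```python
# Hedged sketch: quantifying game-to-game variability as a candidate
# draft-model feature. Game logs below are hypothetical.
from statistics import mean, stdev

# Two made-up prospects with the same per-game average
game_logs = {
    "boom_or_bust": [37, 8, 31, 6, 33, 5],    # high highs, big duds
    "steady":       [20, 19, 21, 20, 19, 21], # consistently the same
}

def variability_features(points):
    """Mean plus spread measures a projection model could use as inputs."""
    mu = mean(points)
    sd = stdev(points)
    return {
        "mean": mu,
        "stdev": sd,
        "coef_var": sd / mu,       # spread relative to scoring level
        "best_game": max(points),  # crude proxy for 'ceiling' performances
    }

for name, log in game_logs.items():
    print(name, variability_features(log))
```

Whether the spread feature actually predicts anything is exactly the open question above; the point of the sketch is only that it's cheap to compute and easy to backtest against career outcomes.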
"And other draft analysts’ individual skill evaluations are more likely a product of first principles film evaluation while their rankings are more likely a product of groupthink."
This is a really interesting point.
Let me see if I can articulate this question... So let's assume that scouts are asked to create component models (good shooter? can he move his feet? etc) and then an ensemble model (given your evaluations of all these skills, what's your overall projection for this guy?). In baseball, my guess is that scouts got an unusually bad rap because they had a REALLY bad ensemble model (linear combination of 5 tools). But is it possible that their component models weren't so bad? I guess my question is: if you divide scouts into ensemble models and component models, where are their evaluations most useful? I agree that the ensemble is more susceptible to groupthink, but are there examples of players where scouts just intuited something that component models didn't catch? Are certain component models (e.g. eye-test of shooting stroke) more reliable than others? (e.g. defensive eval)
Yea that is a nice framework to explain the phenomenon that I have been trying to articulate.
But across sports you still need to be careful about which skillsets scouts are better/worse at evaluating. There can even be some skillsets where scout evaluation adds near-zero additional predictive power depending on data availability (e.g. in-game sprint speed for sports where in-game player tracking data is available).
On overall evaluations, I think scouts can still be useful "hypothesis generators," and pick up on context that might alter how much we should weigh or adjust one piece of info vs. another.
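The component-vs-ensemble split above can be sketched in a few lines. Everything here is invented for illustration (the grades, the weights, the penalty): the point is only that the same component evaluations can feed very different ensemble models, and that a naive linear combination of tools misses "fatal flaw" interactions that a nonlinear ensemble can capture.

```python
# Hedged sketch of the component-vs-ensemble distinction. All numbers
# are hypothetical; grades use a baseball-style 20-80 scouting scale.

# Made-up component grades for one prospect
components = {
    "shooting": 70,
    "handle": 55,
    "defense": 40,
    "athleticism": 65,
    "feel": 45,
}

def naive_ensemble(grades):
    """Baseball-style ensemble: unweighted linear combination of tools."""
    return sum(grades.values()) / len(grades)

def floor_aware_ensemble(grades, floor=50, penalty=0.9):
    """Toy nonlinear ensemble: each tool graded below `floor` discounts
    the projection multiplicatively, so two sub-floor tools hurt more
    than a linear model would say."""
    base = naive_ensemble(grades)
    flaws = sum(1 for g in grades.values() if g < floor)
    return base * (penalty ** flaws)

print(naive_ensemble(components))        # same component inputs...
print(floor_aware_ensemble(components))  # ...different overall projection
```

In this framing, "scouts had a bad ensemble model" just means the first function was doing the aggregation; the component grades themselves could still carry real signal.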
Ace is such a hard projection… I do think he can break out and get close to his upside in the right situation — maybe in Philly as a 4th option, with PG as a mentor to bring out the best in him. On the other hand, his feel and shot selection are alarming, and his “elite skill” isn’t particularly efficient. Really good breakdown
Ace would be the funniest example of putting a player on a bad team that lets him do whatever the heck he wants
Terrible shot selection, probably terrible shooting splits, but everyone’s gonna be amazed at his highlight reel that the league puts out every other month
https://open.substack.com/pub/maximumhoops/p/mock-draft-10?r=5tja6d&utm_medium=ios
This was awesome! Would love to hear your analysis on VJ Edgecombe, I'm hoping he grades out well!