OK, this is Professor Aaron Page. I don't think I adequately explained this system for the students; I am just learning it myself now. Anyway, I look forward to today's discussion.
Zac's claim re fairness seems important, but I am not sure it gets us closer to how fairness would be achieved. Defining fairness in the contested, cacophonous public domain in which AI and other technological developments are being born is going to be difficult. And how do we conceptualize a sense, or ethic, or core of fairness that can be infinitely extrapolated and applied to circumstances and contexts far beyond what we now know? Because that is the challenge with real AI, beyond mere "automated decision-making": AI will continue to learn, will improve its own learning, will recursively layer new learning on top of new learning to approach real intelligence. My inclination is to worry that it cannot be "limited" by rules of fairness unless we find some fundamental core. But if it cannot be limited, is it different to think about it being "motivated" by affirmative, justice-seeking fairness considerations, such that its pursuit of these ends would keep it largely on the "right" track? Isn't that how humans themselves work? And is this perhaps where human rights concepts and strategies come in?
Finally, the quick point that we know (I feel) so little about how the most relevant AI of our time is operating, learning, and developing. Exactly how did it happen that Amazon's human-resources software began to prefer male candidates? We need to open-source that problem for widespread public investigation and discussion, but we are so far from being able to do that given proprietary assumptions and other entrenchments of law. One of the central principles of human rights is transparency, which strikes me as the most critical thing we need right now.
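On the Amazon example, a minimal sketch may help show one widely discussed mechanism for how this happens; this is my toy illustration, not Amazon's actual system, which was never made public. The idea: if a model is trained on historical hiring decisions that favored men, it can learn that preference from proxy features on the resume even when gender itself is withheld from it. All feature names and numbers below are invented for illustration.

```python
# Toy illustration (assumed setup, not Amazon's actual system): a model
# trained on historically biased hiring labels learns the bias through a
# proxy feature, even though gender is never given to it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden protected attribute: 1 = male, 0 = female. The model never sees it.
male = rng.integers(0, 2, n)

# Proxy feature correlated with the attribute, e.g. a resume keyword from a
# historically male-dominated activity: 80% of men have it, 20% of women.
proxy = (rng.random(n) < np.where(male == 1, 0.8, 0.2)).astype(float)

# A genuinely job-relevant skill score, independent of the attribute.
skill = rng.normal(0.0, 1.0, n)

# Historical hiring labels were biased: past decisions rewarded being male
# on top of skill, so the bias is baked into the training labels.
hired = ((skill + 1.5 * male + rng.normal(0.0, 1.0, n)) > 1.0).astype(int)

# Train only on the "neutral" features; gender is deliberately excluded.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in the proxy trait,
# receive different predicted hiring probabilities: the bias survives.
candidates = np.array([[1.0, 0.0],   # has the proxy trait
                       [0.0, 0.0]])  # does not
print(model.predict_proba(candidates)[:, 1])
```

The point of the sketch: two candidates with identical skill scores get different predicted hiring probabilities, solely because of a feature that stands in for gender. Without transparency into the training data and features, this kind of bias is very hard to detect from the outside, which is why the transparency point matters.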
I think it is interesting that in real life (i.e., the law school classroom) we have these Chatham House Rules, but they only work as well as the weakest member of the group. So in real life, we know who is in the group, even if you do not say who voiced the specific idea. Online, however, there is complete anonymity, so you do not even know who it is you are talking to or why they are speaking. How do we deal with the fact that this compounding anonymity has led to terrible places like 8chan or Gab that have real-world consequences?
Doesn't the physical proximity of the person speaking force us to recognize them as human and take in their opinion?
To both questions: Nesson is describing a pseudo-anonymous space, where we have a classroom in which we meet in proximity and have the opportunity to build trust, and we add the digital anonymous space to it. So we are no longer exposed to the open troll space, just the classroom space. But with the use of anonymity (for example, by using Threads), we can still explore our own free expression.
How do we unite "truth" and "data"? Data is concrete, singular, and unchanging; truth is a much more abstract and amorphous concept. I think there can hardly be a "singular" standard of truth because it's much more of a spectrum depending on the perspective you're taking.
@Green Leader Your question may speak to Aaron's question at the beginning of this Thread. We may never know for sure, or reach the "truest" self, but we can be motivated to get on the right track toward pursuing knowledge of our true selves. Can that be an honorable task?
Although we are able to explore our truest selves within the classroom, should we be confident that we can maintain that once we leave? Wouldn't we be challenged again by the discovery of new biases?