You're reminding me of a book called _The View from Nowhere_. I didn't get much out of the book, but the title haunts me as an angle on what some modern people, especially rationalists, are trying to do.
In particular, does it make sense for humans (as I think some rationalists do) to say that it doesn't matter if the human race is destroyed, so long as something better (better for whom?) replaces it?
On the question of whether a data structures and algorithms class is neutral - it just clearly isn't "neutral" when you only have 15 weeks (or 10 weeks at a quarter-system university) and can't teach *every* data structure and algorithm that people talk about. I would bet there's a lot of internal debate in computer science faculties these days about whether the curriculum of that class should be revised to include a week on neural nets and backpropagation, or whether that's better held off for another class! There were hard-won debates decades ago that got NP-hardness included, but not every topic that someone thought was interesting made it in.
Also many of the points you were making in the second half were closely related to a paper I was reading earlier this afternoon: https://philarchive.org/rec/NGUTIS
Nguyen points out that we need expert judgment to make the best decisions we can; but experts are people too, and sometimes make decisions in their own interests rather than ours; so we ask for transparency, where experts explain their judgments; but we can't properly understand experts' explanations of their judgments. When we ask for transparency, we inevitably either require experts to falsify the explanations for their judgments, or to limit their judgments to ones that can be justified to non-experts, and either way there is a serious loss.
He doesn't take this next step, but replacing experts with protocols doesn't get us out of the problem - this is precisely the problem of explainable AI: our best neural nets for classifying things are often unexplainable; but those same nets are often biased in some way (not necessarily by their *interests*, perhaps just by biases in their training data); so we ask for explainability; but insisting that only explainable methods be used limits the methods we have access to.
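To make that tension concrete, here's a toy sketch in Python, assuming scikit-learn is available; the synthetic dataset and model choices are illustrative placeholders of mine, not anything from Nguyen's paper. On most runs the unconstrained neural net wins on accuracy, while the shallow tree is the only model whose complete decision logic can be printed:

```python
# Toy illustration of the explainability/accuracy tension described above.
# All choices here (synthetic data, model sizes) are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Explainable" model: a depth-3 tree whose entire decision logic fits on a screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Higher-capacity model: a small neural net with no comparably compact explanation.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

print("tree accuracy:", tree.score(X_te, y_te))
print("mlp accuracy: ", mlp.score(X_te, y_te))
# The tree's complete "explanation"; the MLP offers nothing this compact.
print(export_text(tree))
```

Restricting ourselves to the tree is exactly the "limit the methods" horn of the dilemma: we gain a full explanation and (typically) give up accuracy.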
Love the article! I am always trying to think up new ideas. One I'm currently trying to figure out what to do with is a mean-fixed market-share cap concept. The idea is that, provably, the amount an individual or org can contribute to grow a market (innovate) is always smaller than the maximum possible market damage an individual/org can do (which in fact can exceed the current market capacity entirely). Thus, the market output that first-come-first-served natural monopolies are guaranteed should only be equivalent to a certain amount of market damage by the same monopoly, after which divestiture/dispersal of the monopoly is required.
TL;DR: we wouldn't need individuals to fight every single antitrust fight, or any politics at all, if we set up an equation where a company's market % x the scale of the market must never exceed a fixed % of (population x average income of the bottom 90% of the population).
(I need to figure out some models to get the best fixed percentage, or even scrap this equation entirely for a similar one).
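For concreteness, here's a minimal sketch of that cap as a computation; every number and name below is a placeholder I invented, and `fixed_pct` is exactly the free parameter the previous line says still needs modeling:

```python
# Minimal sketch of the proposed cap; all figures are invented placeholders.
# Rule from the comment above: market_share * market_size must never exceed
# a fixed fraction of (population * average income of the bottom 90%).

def exceeds_cap(market_share: float, market_size: float,
                population: int, avg_income_bottom_90: float,
                fixed_pct: float = 0.01) -> bool:
    """True if the firm's slice of the market breaches the cap,
    which would trigger the proposed divestiture."""
    firm_output = market_share * market_size
    cap = fixed_pct * population * avg_income_bottom_90
    return firm_output > cap

# Hypothetical example: 40% share of a $2T market, 330M people,
# bottom-90% average income of $45k, cap fraction 1%.
print(exceeds_cap(0.40, 2e12, 330_000_000, 45_000, 0.01))
# firm output 8.0e11 vs cap 1.485e11 -> True (cap breached)
```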
The foundational issue that you've omitted is that humans are animals. Animals compete for habitat and resources. It's in our DNA to be disposed to fear & anger when "things" are not perceived to be going "well". The perception of threat is functionally equivalent to an actual threat. Blaming the "other" (anyone not in our group) is the foundational premise of all who would topple the current order to obtain power. Your discussion is intellectually impressive. I see no errors in your reasoning. If, in the raising of our children, we do not counter our predispositions with critical thinking skills, we will never have an educated electorate. Trump isn't the problem. Hitler wasn't the problem. Their followers were the problem.
To me, Zen Buddhism is closer to universally neutral. But it doesn't matter, because even Buddhism can be twisted into oppression, control, and even a mechanism of war (see Myanmar).
And neutrality is stagnation. Maybe even death in the absolute. Total zero. Stasis.
Everything comes and everything goes... there is pain and torture, death and destruction. But also construction and creation, love and beauty. Birth begets death, and on the ashes of the old, the new grows. This is universal, this is eternal. And, from a certain point of view, "neutral" and just.
Decades ago, traveling to different continents, I found cassettes of local music, and found local-music cassettes among different immigrant communities in the US. The portion of the world's styles of music, let alone individual songs or bands, that was available in the US was quite tiny. Now there is a bit more on the internet, but I would still guess a fairly small percentage. The problem of algorithms that elicit and make available information is pressing on us and on social media, but it seems daunting.

The market signal of harvesting data for selling things may be an important operation of neutrality, in the sense of undermining the grip of this or that parochial control urge. The contention of state regulation vs. social-media-developed algorithms is a potentially healthy balance of considerations (and potentially dysfunctional), especially if multiple large states are participants in the quest for standards.

Historical knowledge seems strongly separated into two sets, Euro/Central Asian/Indian and Chinese, that are not yet mutually translated, but technology and effort will probably soon do so, which I think would cause a boost in several social sciences. The more cases of anything that are compared, the more apparent the difference becomes between function (things that are similar across the cases) and terminology/style/inconsequential local nuance (things that are unique to each case).

Neutrality can be a very harmful filter (i.e., it's not actually neutral toward something important). The increase of governance, of practitioners in fields developing controls based on 'self-evident benefits', in some contexts filters out and suppresses ideas and exploration that don't fit in the self-evident ideological box, e.g. the recent Western 'democratic right of victims' paradigm.

The expansion of available information, knowledge, and opinion might be revealing some incapacity of the European separation of church and state, in the sense that neutrality toward ethics or the human journey cannot remain a viable paradigm impacting many institutions and practices when traditional religious participation has fallen and is no longer doing the heavy lift of socializing youth and young families toward stable emotions and productivity. In 'Radical Reform', Tariq Ramadan recommended that ethics councils be established in each knowledge domain, comprised of religious and non-religious experts of the domain, who would research and recommend controls or guides for the ethical aspect of emerging technology. Some plural of neutral might help the algorithm dilemma.
Is this not redescribing cognitive biases and heuristics?
how??