Artificial intelligence (AI) systems are increasingly used to assist humans in making high-stakes decisions, such as online information curation, resume screening, mortgage lending, police surveillance, public resource allocation, and pretrial detention. While the hope is that algorithms will improve societal outcomes and economic efficiency, concerns have been raised that algorithmic systems may inherit human biases from historical data, perpetuate discrimination against already vulnerable populations, and more generally fail to embody a given community's important values. Recent work on algorithmic fairness has characterized how unfairness can arise at different steps of the development pipeline, produced dozens of quantitative notions of fairness, and provided methods for enforcing those notions. However, a significant gap remains between these over-simplified algorithmic objectives and the complexities of real-world decision-making contexts. This project aims to close that gap by explicitly accounting for the context-specific fairness principles of actual stakeholders, the fairness-utility trade-offs they find acceptable, and the cognitive strengths and limitations of human decision-makers throughout the development and deployment of the algorithmic system.
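
To make the phrase "quantitative notions of fairness" concrete, the sketch below illustrates one widely studied notion, demographic parity, which requires that the rate of positive decisions be equal across groups. The function name and data are illustrative assumptions, not artifacts of this project; it is a minimal example, not a prescription of which notion a given community should adopt.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    y_pred: binary decisions (0/1) for each individual.
    group:  binary group membership (0/1) for each individual.
    A gap of 0 means the decisions satisfy demographic parity exactly.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-decision rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-decision rate, group 1
    return abs(rate_0 - rate_1)

# Illustrative data: screening decisions for ten applicants from two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # |0.6 - 0.4| = 0.2
```

Enforcing such a notion (for example, constraining the gap below a threshold during training) typically reduces predictive utility, which is precisely the fairness-utility trade-off whose acceptable range this project seeks to elicit from actual stakeholders.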