We demonstrate that it is possible to measure metallicity from the SDSS five-band photometry to better than 0.1 dex using supervised machine learning algorithms. Using spectroscopic estimates of metallicity as ground truth, we build, optimize, and train several estimators to predict metallicity. We use the observed photometry, as well as derived quantities such as stellar mass and photometric redshift, as features, and we build two sample data sets at median redshifts of 0.103 and 0.218 and median r-band magnitudes of 17.5 and 18.3, respectively. We find that ensemble methods, such as Random Forests of Trees and Extremely Randomized Trees, and Support Vector Machines all perform comparably well and can measure metallicity with a Root Mean Square Error (RMSE) of 0.081 and 0.090 dex for the two data sets when all objects are included. The fraction of outliers (objects for which the difference between true and predicted metallicity is larger than 0.2 dex) is only 2.2% and 3.9%, respectively, and the RMSE decreases to 0.068 and 0.069 dex if those objects are excluded. Because these algorithms can capture complex relationships between data and target, our technique performs better than previously proposed methods that fit metallicity with an analytic formula, and it has three times the constraining power of SED fitting-based methods. Additionally, this method is extremely forgiving of contamination in the training set, thus requiring minimal data cleaning, and is very flexible, particularly in regard to combining photometric data with other constraints (for example, measurements of emission-line fluxes). We find that our technique yields very satisfactory results for training samples of just a few hundred objects. All routines needed to reproduce our results and apply them to other data sets are made available.
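The workflow summarized above (photometric features and derived quantities as inputs, spectroscopic metallicity as the regression target, evaluated via RMSE and an outlier fraction at the 0.2 dex threshold) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' released code: it uses scikit-learn's RandomForestRegressor, ExtraTreesRegressor, and SVR, randomly generated placeholder arrays in place of the SDSS catalog, and default hyperparameters rather than the optimized settings described in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

# Placeholder data standing in for the SDSS catalog: five-band photometry
# plus derived stellar mass and photometric redshift (7 features), with
# spectroscopic metallicity as the target. Real inputs would come from the
# catalogs described in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 7))
y = rng.normal(loc=8.9, scale=0.15, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Estimators comparable to those named in the abstract; hyperparameters
# here are defaults, not the optimized values used in the study.
estimators = {
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Extremely Randomized Trees": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "Support Vector Machine": SVR(kernel="rbf"),
}

for name, model in estimators.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)

    residuals = pred - y_test
    rmse_all = np.sqrt(mean_squared_error(y_test, pred))

    # Outliers: objects whose predicted metallicity differs from the
    # spectroscopic value by more than 0.2 dex.
    outlier_mask = np.abs(residuals) > 0.2
    outlier_frac = outlier_mask.mean()
    rmse_clipped = np.sqrt(np.mean(residuals[~outlier_mask] ** 2))

    print(f"{name}: RMSE = {rmse_all:.3f} dex, "
          f"outliers = {100 * outlier_frac:.1f}%, "
          f"RMSE (no outliers) = {rmse_clipped:.3f} dex")
```

With real photometric features and spectroscopic targets in place of the placeholder arrays, the same loop reproduces the type of comparison reported in the abstract: a per-estimator RMSE over all objects, the fraction of >0.2 dex outliers, and the RMSE after excluding them.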