Latest Professional-Machine-Learning-Engineer Free Dumps - Google Professional Machine Learning Engineer
You developed a Vertex AI pipeline that trains a classification model on data stored in a large BigQuery table.
The pipeline has four steps, where each step is created by a Python function that uses the Kubeflow Pipelines v2 API. The components have the following names:
You launch your Vertex AI pipeline as follows:
You perform many model iterations by adjusting the code and parameters of the training step.
You observe high costs associated with the development, particularly the data export and preprocessing steps.
You need to reduce model development costs.
What should you do?
Answer: A
Explanation: (available to DumpTOP members only)
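The explanation above is member-only, but one commonly discussed cost lever for this scenario is Vertex AI Pipelines execution caching, which lets unchanged upstream steps such as data export and preprocessing be skipped while you iterate on the training step. The sketch below assumes illustrative component names, table paths, and parameters; it is not presented as the graded answer.

```python
# Minimal sketch of a KFP v2 pipeline launched on Vertex AI with execution
# caching enabled. Component names, the BigQuery table, and the bucket are
# illustrative assumptions.
from kfp import compiler, dsl
from google.cloud import aiplatform


@dsl.component
def export_data(source_table: str, exported: dsl.Output[dsl.Dataset]):
    # Placeholder body; a real component would export the BigQuery table.
    with open(exported.path, "w") as f:
        f.write(f"rows exported from {source_table}\n")


@dsl.component
def preprocess(raw: dsl.Input[dsl.Dataset], processed: dsl.Output[dsl.Dataset]):
    with open(raw.path) as src, open(processed.path, "w") as dst:
        dst.write(src.read())


@dsl.component
def train(data: dsl.Input[dsl.Dataset], learning_rate: float) -> str:
    return f"model trained on {data.path} with lr={learning_rate}"


@dsl.pipeline(name="classification-training-pipeline")
def pipeline(source_table: str, learning_rate: float = 0.1):
    exported = export_data(source_table=source_table)
    processed = preprocess(raw=exported.outputs["exported"])
    train(data=processed.outputs["processed"], learning_rate=learning_rate)


compiler.Compiler().compile(pipeline, "pipeline.json")

job = aiplatform.PipelineJob(
    display_name="classification-training",
    template_path="pipeline.json",
    parameter_values={"source_table": "my-project.my_dataset.training_table"},
    enable_caching=True,  # unchanged export/preprocess steps are reused across runs
)
# job.run()  # requires project and region configured via aiplatform.init()
```

With caching on, Vertex AI reuses a step's prior output when its code, inputs, and arguments have not changed, which is what keeps the export and preprocessing steps from re-running during training-only iterations.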
You are developing an ML pipeline using Vertex AI Pipelines. You want your pipeline to upload a new version of the XGBoost model to Vertex AI Model Registry and deploy it to Vertex AI Endpoints for online inference. You want to use the simplest approach. What should you do?
Answer: B
Explanation: (available to DumpTOP members only)
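One low-code way to express this flow, shown purely as a hedged illustration, uses the Vertex AI SDK: uploading with parent_model registers a new version under an existing Model Registry entry, and deploy() attaches it to an endpoint. Resource names, URIs, and the prebuilt XGBoost serving image tag are assumptions; inside a pipeline, these calls are often replaced by the prebuilt Google Cloud Pipeline Components.

```python
# Hedged sketch: registering a new XGBoost model version in Vertex AI Model
# Registry and deploying it for online inference with the Vertex AI SDK.
# Resource names, URIs, and the serving image are illustrative assumptions.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed values

# Uploading with parent_model adds a new version under the existing registry entry.
model = aiplatform.Model.upload(
    display_name="xgb-classifier",
    parent_model="projects/my-project/locations/us-central1/models/1234567890",
    artifact_uri="gs://my-bucket/xgb-model/",  # directory containing the saved model
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"  # assumed tag
    ),
)

# Deploy the new version to an endpoint for online predictions.
endpoint = aiplatform.Endpoint.create(display_name="xgb-endpoint")
model.deploy(endpoint=endpoint, machine_type="n1-standard-4")
```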
You are a data scientist at an industrial equipment manufacturing company. You are developing a regression model to estimate the power consumption in the company's manufacturing plants based on sensor data collected from all of the plants. The sensors collect tens of millions of records every day. You need to schedule daily training runs for your model that use all the data collected up to the current date. You want your model to scale smoothly and require minimal development work. What should you do?
Answer: A
Explanation: (available to DumpTOP members only)
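As a hedged illustration of the "minimal development work" angle, training can happen where the data already lives with BigQuery ML, so each daily run simply re-executes one SQL statement over everything collected to date. Dataset, table, and column names below are assumptions, not the graded answer.

```python
# Hedged sketch: training a regression model directly in BigQuery with BigQuery ML,
# so daily retraining scales with BigQuery rather than custom infrastructure.
# The dataset, table, and column names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project

query = """
CREATE OR REPLACE MODEL `my_dataset.power_consumption_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['power_kwh']) AS
SELECT
  sensor_1, sensor_2, sensor_3, plant_id, power_kwh
FROM `my_dataset.sensor_readings`
WHERE DATE(reading_time) <= CURRENT_DATE()   -- all data collected up to today
"""

client.query(query).result()  # blocks until the training job finishes
```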
Your task is to classify whether a company logo is present in an image. You found out that 96% of the data does not include a logo, so you are dealing with a class imbalance problem. Which metric should you use to evaluate the model?
Answer: B
Explanation: (available to DumpTOP members only)
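To see why plain accuracy is a poor fit here, note that a model that always predicts "no logo" is already 96% accurate; precision/recall-based summaries such as F1 or the area under the precision-recall curve expose that failure. The snippet below demonstrates this with synthetic labels (an assumption for illustration).

```python
# Hedged sketch: why accuracy is misleading at a 96/4 class split, using
# synthetic labels and a trivial "always negative" predictor as assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, f1_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.04).astype(int)   # ~4% positives (logo present)
y_pred_majority = np.zeros_like(y_true)            # always predict "no logo"
scores_majority = np.zeros_like(y_true, dtype=float)

print("accuracy :", accuracy_score(y_true, y_pred_majority))           # ~0.96
print("F1       :", f1_score(y_true, y_pred_majority))                 # 0.0
print("PR AUC   :", average_precision_score(y_true, scores_majority))  # ~0.04
```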
You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network. What should you do?
Answer: A
Explanation: (available to DumpTOP members only)
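A common pattern for feeding individual words into a recurrent network is to map each word to an integer id from a fixed-size vocabulary and then to a learned embedding. The sketch below assumes illustrative sizes and a tiny stand-in corpus.

```python
# Hedged sketch: per-word integer tokenization plus an embedding layer feeding an
# RNN, a common preprocessing pattern for a 100,000-word vocabulary. All sizes
# and the tiny example corpus are illustrative assumptions.
import tensorflow as tf

VOCAB_SIZE = 100_000
SEQUENCE_LENGTH = 200

# Map each word to an integer id from a capped vocabulary.
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=VOCAB_SIZE,
    output_mode="int",
    output_sequence_length=SEQUENCE_LENGTH,
)
# Tiny stand-in corpus; in practice adapt() runs over the product descriptions.
vectorizer.adapt(tf.constant([
    "durable steel water bottle",
    "wireless noise cancelling headphones",
]))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorizer,                                       # words -> integer ids
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
    tf.keras.layers.LSTM(64),                         # recurrent encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),   # assumed binary label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```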
You trained a text classification model. You have the following SignatureDefs:
What is the correct way to write the predict request?
Answer: C
Explanation: (available to DumpTOP members only)
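Since the SignatureDefs themselves are not reproduced above, the following is only a generic sketch of the TensorFlow Serving-style REST predict request shape, assuming a serving_default signature with a single string input named "text"; the host, model name, and tensor name are assumptions.

```python
# Hedged sketch: a TensorFlow Serving-style REST predict request. The signature
# and the input tensor name ("text") are assumptions, since the SignatureDefs
# from the question are not reproduced above.
import json
import requests

url = "http://localhost:8501/v1/models/text_classifier:predict"  # assumed host/model
payload = {
    "signature_name": "serving_default",
    "instances": [
        {"text": "this product arrived quickly and works great"},
        {"text": "the packaging was damaged"},
    ],
}
response = requests.post(url, data=json.dumps(payload))
print(response.json())  # e.g. {"predictions": [...]} when a model is being served
```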
You created a model that uses BigQuery ML to perform linear regression. You need to retrain the model on the cumulative data collected every week. You want to minimize the development effort and the scheduling cost. What should you do?
Answer: C
Explanation: (available to DumpTOP members only)
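One low-effort pattern, shown here as a hedged sketch rather than the graded answer, is to wrap the CREATE OR REPLACE MODEL statement in a BigQuery scheduled query so the retrain runs weekly with no extra orchestration infrastructure. Project, dataset, and schedule values below are assumptions.

```python
# Hedged sketch: scheduling a weekly BigQuery ML retrain as a BigQuery scheduled
# query via the Data Transfer Service API. Project, dataset, and schedule values
# are illustrative assumptions.
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()
parent = client.common_project_path("my-project")  # assumed project id

retrain_sql = """
CREATE OR REPLACE MODEL `my_dataset.sales_linear_reg`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['label']) AS
SELECT * FROM `my_dataset.training_data`
"""

transfer_config = bigquery_datatransfer.TransferConfig(
    display_name="weekly-bqml-retrain",
    data_source_id="scheduled_query",
    params={"query": retrain_sql},
    schedule="every mon 09:00",  # weekly; schedule syntax per BigQuery docs, assumed
)
client.create_transfer_config(parent=parent, transfer_config=transfer_config)
```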
While running a model training pipeline on Vertex AI, you discover that the evaluation step is failing because of an out-of-memory error. You are currently using TensorFlow Model Analysis (TFMA) with a standard Evaluator TensorFlow Extended (TFX) pipeline component for the evaluation step. You want to stabilize the pipeline without downgrading the evaluation quality while minimizing infrastructure overhead. What should you do?
Answer: A
Explanation: (available to DumpTOP members only)
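One mitigation often discussed for TFMA memory pressure, offered here only as a hedged sketch, is to run the Evaluator's Beam execution on a distributed runner such as Dataflow instead of the default in-process runner. The eval config is minimal, example_gen and trainer stand in for upstream components defined elsewhere in the pipeline, and all project and bucket values are assumptions.

```python
# Hedged sketch: pointing a TFX Evaluator's Beam execution at Dataflow via
# beam_pipeline_args, one commonly discussed way to get past single-worker
# memory limits in TFMA evaluation. Project, bucket, and region are assumptions.
import tensorflow_model_analysis as tfma
from tfx.components import Evaluator

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key="label")],
    slicing_specs=[tfma.SlicingSpec()],
    metrics_specs=[tfma.MetricsSpec(metrics=[tfma.MetricConfig(class_name="AUC")])],
)

# example_gen and trainer are placeholders for upstream components that the
# surrounding pipeline definition is assumed to provide.
evaluator = Evaluator(
    examples=example_gen.outputs["examples"],
    model=trainer.outputs["model"],
    eval_config=eval_config,
).with_beam_pipeline_args([
    "--runner=DataflowRunner",
    "--project=my-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/tmp",
])
```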
You lead a data science team at a large international corporation. Most of the models your team trains are large-scale models using high-level TensorFlow APIs on AI Platform with GPUs. Your team usually takes a few weeks or months to iterate on a new version of a model. You were recently asked to review your team's spending. How should you reduce your Google Cloud compute costs without impacting the model's performance?
Answer: D
Explanation: (available to DumpTOP members only)
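Independently of the graded answer, long multi-week GPU jobs usually pair any cost measure with checkpointing so training can resume if capacity is interrupted. The sketch below shows that safeguard with high-level Keras callbacks; the model, data, and local paths are assumptions (on AI Platform they would typically point to a Cloud Storage bucket).

```python
# Hedged sketch: periodic checkpointing with high-level TensorFlow APIs, the kind
# of safeguard that makes long GPU jobs restartable on cheaper interruptible
# capacity. Model, data, and paths are illustrative assumptions; this is a
# generic cost lever, not the graded answer to the question above.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    # Resumes from the last completed epoch if the job is interrupted.
    tf.keras.callbacks.BackupAndRestore(backup_dir="/tmp/train_backup"),
    # Keeps a copy of the model after each epoch.
    tf.keras.callbacks.ModelCheckpoint(filepath="/tmp/ckpt-{epoch:02d}.keras"),
]

x = tf.random.normal((256, 10))
y = tf.random.normal((256, 1))
model.fit(x, y, epochs=3, callbacks=callbacks)
```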
You work for a large hotel chain and have been asked to assist the marketing team in gathering predictions for a targeted marketing strategy. You need to make predictions about user lifetime value (LTV) over the next 30 days so that marketing can be adjusted accordingly. The customer dataset is in BigQuery, and you are preparing the tabular data for training with AutoML Tables. This data has a time signal that is spread across multiple columns. How should you ensure that AutoML fits the best model to your data?
Answer: B
Explanation: (available to DumpTOP members only)
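Because AutoML Tables expects a time-based split to be driven by a single timestamp column, a time signal spread across several columns is usually collapsed into one column first. The sketch below assumes the signal lives in separate year, month, and day fields and derives a single TIMESTAMP column in BigQuery; all names are assumptions, not the graded answer.

```python
# Hedged sketch: collapsing a time signal spread across several columns (assumed
# here to be year, month, and day fields) into a single TIMESTAMP column, the
# form AutoML Tables expects when a column is designated as the Time column for
# a chronological split. All names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project

client.query("""
CREATE OR REPLACE VIEW `my_dataset.ltv_training_view` AS
SELECT
  customer_id,
  ltv_30d,                                                          -- assumed label
  TIMESTAMP(DATETIME(event_year, event_month, event_day, 0, 0, 0)) AS event_time,
  * EXCEPT (customer_id, ltv_30d, event_year, event_month, event_day)
FROM `my_dataset.customer_events`
""").result()
```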
You work for a company that is developing an application to help users with meal planning. You want to use machine learning to scan a corpus of recipes and extract each ingredient (e.g., carrot, rice, pasta) and each kitchen cookware item (e.g., bowl, pot, spoon) mentioned. Each recipe is saved in an unstructured text file. What should you do?
Answer: C
Explanation: (available to DumpTOP members only)
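Extracting ingredient and cookware mentions from free text is a custom entity-extraction problem. As a hedged sketch, the Vertex AI SDK calls below train an AutoML entity-extraction model from span-annotated recipe text; the bucket path, JSONL annotations, and display names are assumptions.

```python
# Hedged sketch: training a custom entity-extraction model on labeled recipe
# text with Vertex AI AutoML, so "ingredient" and "cookware" spans can be
# pulled out of unstructured files. Paths and names are assumptions, and the
# JSONL must already carry span annotations for both entity types.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed values

dataset = aiplatform.TextDataset.create(
    display_name="recipes-entities",
    gcs_source=["gs://my-bucket/recipes/annotations.jsonl"],
    import_schema_uri=aiplatform.schema.dataset.ioformat.text.extraction,
)

job = aiplatform.AutoMLTextTrainingJob(
    display_name="recipe-entity-extraction",
    prediction_type="extraction",
)
model = job.run(dataset=dataset, model_display_name="recipe-entity-extractor")
```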
You are pre-training a large language model on Google Cloud. This model includes custom TensorFlow operations in the training loop. Model training will use a large batch size, and you expect training to take several weeks. You need to configure a training architecture that minimizes both training time and compute costs. What should you do?
Answer: C
Explanation: (available to DumpTOP members only)
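Without presuming the graded answer, a generic pattern for this kind of workload is synchronous data-parallel training with tf.distribute, which keeps custom TensorFlow operations in the training loop while scaling the effective batch size across accelerators. The model architecture, vocabulary size, and synthetic data below are assumptions.

```python
# Hedged sketch: synchronous data-parallel training with tf.distribute, a setup
# that still supports custom TensorFlow ops in the training loop while scaling
# the effective batch size across accelerators. Model, data, and sizes are
# illustrative assumptions; this is a generic pattern, not the graded answer.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # uses all GPUs on a single VM
# A multi-VM job would typically use MultiWorkerMirroredStrategy with TF_CONFIG set.

GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=32_000, output_dim=256),
        tf.keras.layers.LSTM(256),
        tf.keras.layers.Dense(32_000),       # next-token logits, assumed vocabulary
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Tiny synthetic dataset so the sketch runs end to end.
tokens = tf.random.uniform((512, 128), maxval=32_000, dtype=tf.int32)
labels = tf.random.uniform((512,), maxval=32_000, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((tokens, labels)).batch(GLOBAL_BATCH_SIZE)

model.fit(dataset, epochs=1)
```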