Latest Professional-Machine-Learning-Engineer Free Dumps - Google Professional Machine Learning Engineer

You are building a real-time prediction engine that streams files, which may contain personally identifiable information (PII), to Google Cloud. You want to use the Cloud Data Loss Prevention (DLP) API to scan the files. How should you ensure that the PII is not accessible by unauthorized individuals?

Answer: C
Explanation: (visible to DumpTOP members only)
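The question concerns scanning and protecting PII with the Cloud DLP API. As a minimal local illustration of the underlying idea of de-identification (this is a hypothetical regex-based stand-in, not the DLP API, which detects far more infoType patterns), one could mask sensitive spans before data is stored:

```python
import re

# Hypothetical local stand-in for DLP-style de-identification:
# mask email addresses and US-style SSNs before data is persisted.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace detected PII spans with type-labelled placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```

In the real service, the `content.deidentify` endpoint performs this masking server-side with managed detectors, so raw PII never needs to reach downstream consumers.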
You have created multiple versions of an ML model and have imported them to Vertex AI Model Registry.
You want to perform A/B testing to identify the best-performing model using the simplest approach. What should you do?

Answer: C
Explanation: (visible to DumpTOP members only)
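A/B testing of model versions on a Vertex AI endpoint works by splitting incoming traffic between deployed models. A minimal sketch of that traffic-splitting idea, assuming hypothetical version names and a deterministic hash on a user ID so each user consistently hits the same version:

```python
import hashlib

def assign_version(user_id: str, split: dict) -> str:
    """split maps version name -> percentage of traffic (sums to 100)."""
    # Hash the user ID into a bucket 0..99, then walk the cumulative split.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for version, pct in split.items():
        cumulative += pct
        if bucket < cumulative:
            return version
    raise ValueError("percentages must sum to 100")

split = {"model_v1": 80, "model_v2": 20}  # illustrative 80/20 split
counts = {"model_v1": 0, "model_v2": 0}
for i in range(10_000):
    counts[assign_version(f"user-{i}", split)] += 1
print(counts)  # roughly an 80/20 split over many users
```

On a real endpoint the `trafficSplit` field performs this routing server-side; the sketch only shows why a percentage split yields comparable cohorts for evaluation.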
You are pre-training a large language model on Google Cloud. This model includes custom TensorFlow operations in the training loop. Model training will use a large batch size, and you expect training to take several weeks. You need to configure a training architecture that minimizes both training time and compute costs. What should you do?

Answer: C
Explanation: (visible to DumpTOP members only)
You need to develop an image classification model by using a large dataset that contains labeled images in a Cloud Storage Bucket. What should you do?

Answer: C
Explanation: (visible to DumpTOP members only)
During batch training of a neural network, you notice that there is an oscillation in the loss. How should you adjust your model to ensure that it converges?

Answer: D
Explanation: (visible to DumpTOP members only)
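An oscillating loss during batch training commonly indicates a learning rate that is too large, so each gradient step overshoots the minimum. A minimal sketch of the effect, using plain gradient descent on f(w) = w² (gradient 2w):

```python
# Gradient descent on f(w) = w^2 starting at w = 5.
# With a step size near the stability limit the iterate overshoots the
# minimum and bounces from side to side; a smaller step size converges
# smoothly. Lowering the learning rate is the usual fix.
def trajectory(lr: float, steps: int = 6, w: float = 5.0) -> list:
    path = [w]
    for _ in range(steps):
        w -= lr * 2 * w  # one gradient step
        path.append(round(w, 4))
    return path

print(trajectory(lr=0.95))  # sign flips every step: overshooting
print(trajectory(lr=0.10))  # smooth monotone decay toward 0
```

The toy quadratic is illustrative only; in a real neural network the same overshoot shows up as a loss curve that bounces instead of descending.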
You are a lead ML engineer at a retail company. You want to track and manage ML metadata in a centralized way so that your team can have reproducible experiments by generating artifacts. Which management solution should you recommend to your team?

Answer: A
Explanation: (visible to DumpTOP members only)
You recently joined an enterprise-scale company that has thousands of datasets. You know that there are accurate descriptions for each table in BigQuery, and you are searching for the proper BigQuery table to use for a model you are building on AI Platform. How should you find the data that you need?

Answer: B
Explanation: (visible to DumpTOP members only)
You trained a model on data stored in a Cloud Storage bucket. The model needs to be retrained frequently in Vertex AI Training using the latest data in the bucket. Data preprocessing is required prior to retraining. You want to build a simple and efficient near-real-time ML pipeline in Vertex AI that will preprocess the data when new data arrives in the bucket. What should you do?

Answer: D
Explanation: (visible to DumpTOP members only)
You have been tasked with deploying prototype code to production. The feature engineering code is in PySpark and runs on Dataproc Serverless. The model training is executed by using a Vertex AI custom training job. The two steps are not connected, and the model training must currently be run manually after the feature engineering step finishes. You need to create a scalable and maintainable production process that runs end-to-end and tracks the connections between steps. What should you do?

Answer: D
Explanation: (visible to DumpTOP members only)
You work for a public transportation company and need to build a model to estimate delay times for multiple transportation routes. Predictions are served directly to users in an app in real time. Because different seasons and population increases impact the data relevance, you will retrain the model every month. You want to follow Google-recommended best practices. How should you configure the end-to-end architecture of the predictive model?

Answer: B
Explanation: (visible to DumpTOP members only)
You manage a team of data scientists who use a cloud-based backend system to submit training jobs. This system has become very difficult to administer, and you want to use a managed service instead. The data scientists you work with use many different frameworks, including Keras, PyTorch, Theano, scikit-learn, and custom libraries. What should you do?

Answer: C
Explanation: (visible to DumpTOP members only)
You work for a food product company. Your company's historical sales data is stored in BigQuery. You need to use Vertex AI's custom training service to train multiple TensorFlow models that read the data from BigQuery and predict future sales. You plan to implement a data preprocessing algorithm that performs min-max scaling and bucketing on a large number of features before you start experimenting with the models. You want to minimize preprocessing time, cost, and development effort. How should you configure this workflow?

Answer: B
Explanation: (visible to DumpTOP members only)
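The two preprocessing steps this question names are mechanically simple; a minimal pure-Python sketch (the sample values are invented for illustration) of min-max scaling to [0, 1] followed by equal-width bucketing:

```python
# Min-max scaling to [0, 1], then bucketing into equal-width bins.
def min_max_scale(values: list) -> list:
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def bucketize(scaled: list, num_buckets: int) -> list:
    # A value of exactly 1.0 falls into the last bucket.
    return [min(int(v * num_buckets), num_buckets - 1) for v in scaled]

sales = [120.0, 80.0, 200.0, 80.0, 150.0]  # illustrative sales figures
scaled = min_max_scale(sales)
print(bucketize(scaled, 4))
# → [1, 0, 3, 0, 2]
```

At the scale the question describes, the design choice is where this logic runs (for example, pushed into the data warehouse versus recomputed inside every training job), since the transforms themselves are cheap per row but must be applied consistently across all models.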
You are creating a deep neural network classification model using a dataset with categorical input values.
Certain columns have a cardinality greater than 10,000 unique values. How should you encode these categorical values as input into the model?

Answer: C
Explanation: (visible to DumpTOP members only)
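For categorical columns with more than 10,000 unique values, one-hot encoding becomes impractical. A common approach is the hashing trick: map each category into a fixed number of buckets so the input dimension stays bounded. A minimal sketch, where the bucket count of 1,000 is an illustrative choice:

```python
import hashlib

NUM_BUCKETS = 1_000  # illustrative; chosen to trade collisions for size

def hash_bucket(category: str, num_buckets: int = NUM_BUCKETS) -> int:
    """Deterministically map a category string to a bucket ID."""
    digest = hashlib.sha1(category.encode()).hexdigest()
    return int(digest, 16) % num_buckets

ids = [hash_bucket(c) for c in ["SKU-98231", "SKU-00017", "SKU-98231"]]
print(ids)  # identical categories always map to the same bucket
```

In a deep network the bucket ID would then typically index a learned embedding table; the sketch only shows the deterministic bucketing step that caps the cardinality.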
You work for a large retailer and you need to build a model to predict customer churn. The company has a dataset of historical customer data, including customer demographics, purchase history, and website activity.
You need to create the model in BigQuery ML and thoroughly evaluate its performance. What should you do?

Answer: D
Explanation: (visible to DumpTOP members only)
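Thorough evaluation of a binary churn classifier rests on confusion-matrix metrics; in BigQuery ML, `ML.EVALUATE` reports these (along with log loss and ROC AUC) for classification models. A pure-Python sketch of the arithmetic behind the core metrics, with invented labels where 1 = churned and 0 = retained:

```python
def evaluate(y_true: list, y_pred: list) -> dict:
    """Accuracy, precision, and recall from a binary confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

print(evaluate([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```

For churn, recall matters when missing a churner is costly and precision when retention offers are expensive, which is why reporting a single accuracy number is rarely a thorough evaluation.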
You are building a model to predict daily temperatures. You split the data randomly and then transformed the training and test datasets. Temperature data for model training is uploaded hourly. During testing, your model performed with 97% accuracy; however, after deploying to production, the model's accuracy dropped to 66%.
How can you make your production model more accurate?

Answer: B
Explanation: (visible to DumpTOP members only)
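An accuracy drop like the one described (97% in testing, 66% in production) is typical of training/serving skew: transformations fitted separately on each split, or refitted on serving data, shift the feature distribution the model sees. A minimal sketch of the fix, with invented temperature values: fit the standardization statistics on the training split only and reuse them everywhere:

```python
from statistics import mean, pstdev

def fit_standardizer(train: list):
    """Fit mean/std on training data; return a reusable transform."""
    mu, sigma = mean(train), pstdev(train)
    return lambda xs: [(x - mu) / sigma for x in xs]

train_temps = [10.0, 12.0, 14.0, 16.0, 18.0]  # illustrative training data
transform = fit_standardizer(train_temps)

# Serving-time data is transformed with the *training* statistics,
# never with statistics recomputed on the incoming batch.
print(transform([14.0, 20.0]))
```

For time series such as hourly temperatures, splitting randomly rather than by time compounds the problem, since the test set then leaks future information that production will not have.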
You are an ML engineer at a manufacturing company. You need to build a model that identifies defects in products based on images of the product taken at the end of the assembly line. You want your model to preprocess the images with lower computation to quickly extract features of defects in products. Which approach should you use to build the model?

Answer: C
Explanation: (visible to DumpTOP members only)
