How do I set the bit depth for the MFT encode case?
Hardware-accelerated decoding with D3D11 and IMFTransform
Hi, I've been trying to get hardware-accelerated video decoding working with D3D11 and IMFTransform using this article:
https://docs.microsoft.com/en-us/windows/desktop/medfound/supporting-direct3d-11-video-decoding-in-media-foundation
I have two questions - first, I'm unable to create a Texture2D with the D3D11_BIND_DECODER flag, which is necessary to create the output view. Do you know why this might be the case?
I'd be happy to provide some code.
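In the meantime, here is roughly the kind of creation call I mean (a simplified sketch; the width, height and format are placeholders, the real values would come from the decoder's output type):
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 1920;               // placeholder; should match the decoder output
desc.Height = 1080;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_NV12;  // D3D11_BIND_DECODER expects a decoder-friendly format
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_DECODER;   // usually not combinable with render-target/UAV flags

ID3D11Texture2D *pDecodeTexture = nullptr;
HRESULT hr = pDevice->CreateTexture2D(&desc, nullptr, &pDecodeTexture);
// This creation call is what fails for me.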
The interface IMFQualityAdvise of Microsoft H264 Video Decoder MFT not working
Hi, I am trying to set the drop mode of the transform, but it's not working. Code:
HRESULT hr1;
CComPtr<IMFQualityAdvise> quality_advise;
hr1 = _realTransform->QueryInterface(IID_IMFQualityAdvise, (void **)&quality_advise);
if (hr1 != S_OK)
{
    VXError(L"hr1=" << hex << hr1);
}
else
{
    hr1 = quality_advise->SetDropMode(MF_DROP_MODE_3);
    VXDebug(L"hr1=" << hex << hr1);
    hr1 = quality_advise->SetQualityLevel(MF_QUALITY_NORMAL_MINUS_3);
    VXDebug(L"hr1=" << hex << hr1);
}
The HRESULTs returned S_OK, but video playback seemed unaffected.
Encoding a D3D Surface obtained through Desktop Duplication using Media Foundation
I want to encode Desktop Duplication API frames with Media Foundation and send them over the network. I'm stuck with an E_NOTIMPL error when I call IMFTransform::ProcessInput, which leaves me a little in the dark.
These are the steps I've taken so far. I'm detailing them because it took me days to gather everything from the scarce, scattered info across the web, so if this gets resolved it will hopefully help others. Every call below returns S_OK:
- I'm obtaining the surface through Duplication API, creating an IMFSample from it using MFCreateVideoSampleFromSurface
- I'm getting a video encoder using IMFActivate::ActivateObject from an IMFActivate initialized with MFT_CATEGORY_VIDEO_ENCODER and MFVideoFormat_H264
- I'm initializing an IMFMediaType for the input with bitrate, framerate, aspect ratio, etc., and most importantly MFVideoFormat_NV12, which seems to be the only format that works with the DXGI_FORMAT_B8G8R8A8_UNORM of the Desktop Duplication API.
- I'm setting an IMFMediaType on the output with the same values as above, apart from MFVideoFormat_H264 as MF_MT_SUBTYPE.
- I'm calling IMFTransform::SetOutputType then IMFTransform::SetInputType with the two types above.
- I'm setting the IMFSample time to 0, as it seems it isn't set by MFCreateVideoSampleFromSurface. I'm also setting the sample duration with MFFrameRateToAverageTimePerFrame using the input FPS.
After all of this, I call IMFTransform::ProcessInput with the IMFSample created above and get an "E_NOTIMPL not implemented" HRESULT. I've read that I should set an IMFDXGIDeviceManager on my IMFTransform encoder above, so I did that using:
- MFCreateDXGIDeviceManager from my ID3D11Device used with Desktop Duplication API and an arbitrary reset token.
- Doing an IMFDXGIDeviceManager::ResetDevice with the device and token.
- Calling IMFTransform::ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER, reinterpret_cast<ULONG_PTR>(m_pDXDeviceManager)).
On this last step I get another "E_NOTIMPL not implemented" HRESULT, and that's where I no longer know what I'm doing wrong or what still needs to be done.
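For completeness, here is a condensed sketch of how I currently understand the D3D manager is supposed to be wired up. My guess is that the stock software H.264 encoder simply does not accept a D3D manager and a hardware MFT has to be enumerated instead, but that is exactly the part I am unsure about (variable names are mine, error handling omitted):
// Hypothetical sketch: enumerate a hardware H.264 encoder and hand it the DXGI device manager.
MFT_REGISTER_TYPE_INFO outInfo = { MFMediaType_Video, MFVideoFormat_H264 };
IMFActivate **ppActivate = NULL;
UINT32 count = 0;
HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
                       MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
                       NULL, &outInfo, &ppActivate, &count);
if (SUCCEEDED(hr) && count > 0)
{
    IMFTransform *pEncoder = NULL;
    hr = ppActivate[0]->ActivateObject(IID_PPV_ARGS(&pEncoder));
    if (SUCCEEDED(hr))
    {
        // The manager is passed as a ULONG_PTR.
        hr = pEncoder->ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER,
                                      reinterpret_cast<ULONG_PTR>(m_pDXDeviceManager));
    }
}
for (UINT32 i = 0; i < count; i++) ppActivate[i]->Release();
CoTaskMemFree(ppActivate);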
Having separate volume control over multiple media sessions
Hello, I have a class that loads an audio file (wav/mp3) and plays it. The class has an IMFSimpleAudioVolume member initialized with MFGetService, for example:
IMFSimpleAudioVolume* pVolume;
MFGetService(pSink, MR_POLICY_VOLUME_SERVICE, IID_PPV_ARGS(&pVolume));
I won't post the entire class here since it's huge, but basically the class is a media session that plays a single file.
I create multiple objects of this class, each representing an audio file that plays at the same time as the others. The problem is that when I change the master volume in one object (for one file) via IMFSimpleAudioVolume::SetMasterVolume, it changes the volume of the other objects (files) too!
I need each object to have its own volume level, independent of the others, so that changing the volume in one object does not touch the volume of another object. Currently it does as soon as I call IMFSimpleAudioVolume::SetMasterVolume.
How do I achieve that? Is it possible to have a separate master volume for each media session, for example a master volume plus volume controls for the individual sessions?
Thank you a lot, any advice is welcome!
Where can I find examples of encoding a Desktop Duplication texture to H.264 through Media Foundation?
Using Mpeg4/H.264 output with Sink Writer gives output with frames upside down
I've written a simple application that is based on the Sink Writer sample found here:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff819477(v=vs.85).aspx
When using the default encoding setup, i.e. with encoding format set to MFVideoFormat_WMV3 and creating a writer based on 'wmv' file extension, I get an output as expected.
However, if I change the encoding to MFVideoFormat_H264 and create a sink writer based on a '*.mp4' file name, the output video has the frames flipped upside down. The output file looks otherwise to have been generated as a valid MPEG4/H.264 file.
EDIT: I get the same flipped effect if I change to MFVideoFormat_H264 in the original sample mentioned.
Why is this happening? Are there any additional settings I need to set to make my MPEG4 frames appear non-flipped?
Regards,
Leif
Media Foundation SinkWriter gives vertically flipped H264 encoded video
Hi
I am using the Media Foundation SinkWriter to convert bitmap images into H.264-encoded video. I use the exact example provided in the Microsoft documentation link, where a static input buffer is encoded to video; I simply replaced that buffer with my BitmapData.Scan0 buffer, and my input type is RGB 24.
The output video is fine when the output media type is WMV1, WMV2 or WMV3, but it is vertically flipped when the output type is H264.
Given below is the sample test application:
// MFTSinkWriter.cpp : Defines the entry point for the console application.//
#include "stdafx.h"
#include <Windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <Mfreadwrite.h>
#include <mferror.h>
#include <GdiPlus.h>
#include <iostream>
#include <sstream>
using namespace Gdiplus;
#pragma comment(lib, "Gdiplus.lib")
#pragma comment(lib, "mfreadwrite")
#pragma comment(lib, "mfplat")
#pragma comment(lib, "mfuuid")
template <class T> void SafeRelease(T **ppT)
{
if (*ppT)
{
(*ppT)->Release();
*ppT = NULL;
}
}
// Format constants
UINT32 VIDEO_WIDTH = 0;
UINT32 VIDEO_HEIGHT = 0;
UINT32 VIDEO_STRIDE = 0;
const GUID VIDEO_ENCODING_FORMAT = MFVideoFormat_H264;
const UINT32 VIDEO_BIT_RATE = 800000;
const UINT32 VIDEO_FPS = 30;
const GUID VIDEO_INPUT_FORMAT = MFVideoFormat_RGB24;
const UINT32 VIDEO_FRAME_COUNT = 20 * VIDEO_FPS;
const UINT64 VIDEO_FRAME_DURATION = 10 * 1000 * 1000 / VIDEO_FPS;
HRESULT InitializeSinkWriter(IMFSinkWriter **ppWriter, DWORD *pStreamIndex)
{
*ppWriter = NULL;
*pStreamIndex = NULL;
IMFSinkWriter *pSinkWriter = NULL;
IMFMediaType *pMediaTypeOut = NULL;
IMFMediaType *pMediaTypeIn = NULL;
DWORD streamIndex;
IMFAttributes *attributes;
MFCreateAttributes(&attributes, TRUE);
HRESULT hr = attributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
hr = attributes->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, TRUE);
hr = attributes->SetUINT32(MF_LOW_LATENCY, TRUE);
hr = attributes->SetGUID(MF_TRANSCODE_CONTAINERTYPE, MFTranscodeContainerType_MPEG4);
hr = MFCreateSinkWriterFromURL(L"output.wmv", NULL, attributes, &pSinkWriter);
// Set the output media type.
if (SUCCEEDED(hr))
{
hr = MFCreateMediaType(&pMediaTypeOut);
}
if (SUCCEEDED(hr))
{
hr = pMediaTypeOut->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
}
if (SUCCEEDED(hr))
{
hr = pMediaTypeOut->SetGUID(MF_MT_SUBTYPE, VIDEO_ENCODING_FORMAT);
}
if (SUCCEEDED(hr))
{
hr = pMediaTypeOut->SetUINT32(MF_MT_AVG_BITRATE, VIDEO_BIT_RATE);
}
if (SUCCEEDED(hr))
{
hr = pMediaTypeOut->SetUINT32(MF_MT_INTERLACE_MODE, 2);
}
if (SUCCEEDED(hr))
{
hr = MFSetAttributeSize(pMediaTypeOut, MF_MT_FRAME_SIZE, VIDEO_WIDTH, VIDEO_HEIGHT);
}
if (SUCCEEDED(hr))
{
hr = MFSetAttributeRatio(pMediaTypeOut, MF_MT_FRAME_RATE, VIDEO_FPS, 1);
}
if (SUCCEEDED(hr))
{
hr = MFSetAttributeRatio(pMediaTypeOut, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
}
if (SUCCEEDED(hr))
{
hr = pSinkWriter->AddStream(pMediaTypeOut, &streamIndex);
}
// Set the input media type.
if (SUCCEEDED(hr))
{
hr = MFCreateMediaType(&pMediaTypeIn);
}
if (SUCCEEDED(hr))
{
hr = pMediaTypeIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
}
if (SUCCEEDED(hr))
{
hr = pMediaTypeIn->SetGUID(MF_MT_SUBTYPE, VIDEO_INPUT_FORMAT);
}
if (SUCCEEDED(hr))
{
hr = pMediaTypeIn->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
}
if (SUCCEEDED(hr))
{
hr = MFSetAttributeSize(pMediaTypeIn, MF_MT_FRAME_SIZE, VIDEO_WIDTH, VIDEO_HEIGHT);
}
if (SUCCEEDED(hr))
{
hr = MFSetAttributeRatio(pMediaTypeIn, MF_MT_FRAME_RATE, VIDEO_FPS, 1);
}
if (SUCCEEDED(hr))
{
hr = MFSetAttributeRatio(pMediaTypeIn, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
}
if (SUCCEEDED(hr))
{
hr = pSinkWriter->SetInputMediaType(streamIndex, pMediaTypeIn, NULL);
}
// Tell the sink writer to start accepting data.
if (SUCCEEDED(hr))
{
hr = pSinkWriter->BeginWriting();
}
// Return the pointer to the caller.
if (SUCCEEDED(hr))
{
*ppWriter = pSinkWriter;
(*ppWriter)->AddRef();
*pStreamIndex = streamIndex;
}
SafeRelease(&pSinkWriter);
SafeRelease(&pMediaTypeOut);
SafeRelease(&pMediaTypeIn);
return hr;
}
HRESULT WriteFrame(
Bitmap *b2,
IMFSinkWriter *pWriter,
DWORD streamIndex,
const LONGLONG& rtStart, // Time stamp.
DWORD counter
)
{
BitmapData bmpData;
b2->LockBits(new Rect(0,0,b2->GetWidth(),b2->GetHeight()), ImageLockMode::ImageLockModeRead, b2->GetPixelFormat(), &bmpData);
IMFSample *pSample = NULL;
IMFMediaBuffer *pBuffer = NULL;
const LONG cbWidth = 4 * VIDEO_WIDTH;//bmpData.Stride;
const DWORD cbBuffer = cbWidth * VIDEO_HEIGHT;
BYTE *pData = NULL;
// Create a new memory buffer.
HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &pBuffer);
// Lock the buffer and copy the video frame to the buffer.
if (SUCCEEDED(hr))
{
hr = pBuffer->Lock(&pData, NULL, NULL);
}
if (SUCCEEDED(hr))
{
hr = MFCopyImage(
pData, // Destination buffer.
bmpData.Stride, // Destination stride.
(BYTE*)bmpData.Scan0, // First row in source image.
bmpData.Stride, // Source stride.
cbWidth, // Image width in bytes.
VIDEO_HEIGHT // Image height in pixels.
);
}
if (pBuffer)
{
pBuffer->Unlock();
}
// Set the data length of the buffer.
if (SUCCEEDED(hr))
{
hr = pBuffer->SetCurrentLength(cbBuffer);
}
// Create a media sample and add the buffer to the sample.
if (SUCCEEDED(hr))
{
hr = MFCreateSample(&pSample);
}
if (SUCCEEDED(hr))
{
hr = pSample->AddBuffer(pBuffer);
}
// Set the time stamp and the duration.
if (SUCCEEDED(hr))
{
hr = pSample->SetSampleTime(rtStart);
}
if (SUCCEEDED(hr))
{
hr = pSample->SetSampleDuration(VIDEO_FRAME_DURATION);
}
// Send the sample to the Sink Writer.
if (SUCCEEDED(hr))
{
hr = pWriter->WriteSample(streamIndex, pSample);
}
SafeRelease(&pSample);
SafeRelease(&pBuffer);
// Unlock the bits.
b2->UnlockBits( &bmpData );
//delete b;
return hr;
}
void main()
{
GdiplusStartupInput gdiplusStartupInput;
ULONG_PTR gdiplusToken;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
Gdiplus::Bitmap * image = Gdiplus::Bitmap::FromFile(L"F:\\test.bmp");
VIDEO_WIDTH = image->GetWidth();
VIDEO_HEIGHT = image->GetHeight();
HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
if (SUCCEEDED(hr))
{
hr = MFStartup(MF_VERSION);
if (SUCCEEDED(hr))
{
IMFSinkWriter *pSinkWriter = NULL;
DWORD stream;
hr = InitializeSinkWriter(&pSinkWriter, &stream);
if (SUCCEEDED(hr))
{
// Send frames to the sink writer.
LONGLONG rtStart = 0;
for (DWORD i = 0; i < VIDEO_FRAME_COUNT; ++i)
{
hr = WriteFrame(image, pSinkWriter, stream, rtStart, i);
if (FAILED(hr))
{
break;
}
rtStart += VIDEO_FRAME_DURATION;
}
}
if (SUCCEEDED(hr))
{
hr = pSinkWriter->Finalize();
}
SafeRelease(&pSinkWriter);
MFShutdown();
}
CoUninitialize();
}
GdiplusShutdown(gdiplusToken);
}
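One guess about the flip (unverified): uncompressed RGB frames in Media Foundation are bottom-up by default, while my GDI+ buffer is presumably top-down, and the H.264 path seems to interpret the rows the other way around than the WMV path does. A minimal sketch of the workaround I am considering, which replaces the single MFCopyImage call above by copying the rows in reverse order (names match the sample above):
// Copy the source image bottom-up so the encoder sees the rows in the opposite order.
const LONG rowBytes = bmpData.Stride;                              // bytes per source row
const BYTE *pSrcRow = (const BYTE*)bmpData.Scan0
                      + (LONGLONG)rowBytes * (VIDEO_HEIGHT - 1);   // start at the last row
BYTE *pDestRow = pData;
for (UINT32 y = 0; y < VIDEO_HEIGHT; ++y)
{
    memcpy(pDestRow, pSrcRow, rowBytes);
    pSrcRow  -= rowBytes;    // walk the source upwards
    pDestRow += rowBytes;    // destination stays top-down
}
Setting MF_MT_DEFAULT_STRIDE to a negative value on the input media type is another approach I have seen mentioned, but I have not verified either one.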
Media Foundation record audio
What can I do to avoid the issue?
Linking issues (mfuuid+Cygwin)
I located most of the needed symbols in DLL files (mf.dll and mfplat.dll), but there are a few things I couldn't find:
_IID_IMFAsyncCallback
_MF_EVENT_TOPOLOGY_STATUS
_MF_TOPONODE_SOURCE
_MF_TOPONODE_PRESENTATION_DESCRIPTOR
_MF_TOPONODE_STREAM_DESCRIPTOR
_MFMediaType_Audio
_MFMediaType_Video
Most of that stuff links against mfuuid.lib, but I can't use Microsoft .lib files with Cygwin (and there is no mfuuid.dll). So I need to know which .dll files contain those symbols so I can use dlltool to create Cygwin-compatible import libraries. I searched the whole System32 folder for those strings but didn't get any match.
If someone can help, I would be happy.
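One workaround I'm experimenting with (unverified): to my understanding mfuuid.lib is just a static library of GUID definitions, so those symbols are never exported from any DLL, which would explain why the System32 search finds nothing. Defining the GUIDs in one of my own translation units seems to cover most of them:
// Including <initguid.h> first makes the DEFINE_GUID macros in the MF headers emit
// actual definitions instead of extern declarations, so MF_EVENT_TOPOLOGY_STATUS,
// the MF_TOPONODE_* keys, MFMediaType_Audio and MFMediaType_Video end up defined
// in this object file and link without mfuuid.lib.
#include <initguid.h>
#include <mfapi.h>
#include <mfidl.h>
// Interface IIDs such as IID_IMFAsyncCallback may not be covered by this trick;
// those might still need a manual DEFINE_GUID with the value taken from the headers.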
When using IMFMediaEngine, what is the best way to draw timed text like closed captions or subtitles
I'm using IMFMediaEngine to build a video player application. Basic playback works well with IMFMediaEngine, but I'm having trouble displaying timed text (IMFTimedText), which is used for closed captions or subtitles.
From my investigation, the way to display timed text with IMFMediaEngine is to use frame-server mode and draw the text myself onto the video frame obtained via IMFMediaEngine::TransferVideoFrame. This approach doesn't seem very handy, so I would like to know if there is an easier way to display timed text.
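For reference, the frame-server approach I mean looks roughly like this (a simplified sketch with my own variable names; targetWidth/targetHeight are placeholders, and the actual text rendering would be done with Direct2D/DirectWrite or similar on top of the transferred frame):
LONGLONG pts = 0;
// S_OK means a new frame is available for the current playback position.
if (m_pMediaEngine->OnVideoStreamTick(&pts) == S_OK)
{
    RECT dst = { 0, 0, targetWidth, targetHeight };
    MFARGB border = { 0, 0, 0, 0xFF };
    HRESULT hr = m_pMediaEngine->TransferVideoFrame(m_pTargetTexture, nullptr, &dst, &border);
    if (SUCCEEDED(hr))
    {
        // ... look up the active IMFTimedText cues for 'pts' and draw them onto
        //     m_pTargetTexture, then present the texture ...
    }
}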
Getting null IMFDXGIBuffer
I am new to Media Foundation and DirectX. I am capturing frames from a webcam using IMFSourceReader and I receive the IMFSourceReaderCallback calls as expected. Now I want to render these frames with the ID3D11Device I created using D3D11CreateDeviceAndSwapChain. Below is my code. When 'CComQIPtr<IMFDXGIBuffer> dxgiBuffer(SourceMediaPtr);' executes, dxgiBuffer comes back NULL. I have gone through the available documentation and tutorials but cannot figure out the error. Also, once I get this buffer, should I be using CopySubresourceRegion to render the frame?
//Method from IMFSourceReaderCallback
HRESULT Media::OnReadSample(HRESULT status, DWORD streamIndex, DWORD streamFlags, LONGLONG timeStamp, IMFSample *pSample)
{
HRESULT hr = S_OK;
DWORD NumBuffers = 0;
//RenderFrame();
EnterCriticalSection(&criticalSection);
do {
if (pSample == NULL)
break;
hr = pSample->GetBufferCount(&NumBuffers);
if (FAILED(hr) || NumBuffers < 1)
{
break;
}
IMFMediaBuffer* SourceMediaPtr = nullptr;
hr = pSample->GetBufferByIndex(0, &SourceMediaPtr);
if (FAILED(hr))
{
break;
}
if (SourceMediaPtr)
{
CComQIPtr<IMFDXGIBuffer> dxgiBuffer(SourceMediaPtr);
ID3D11Texture2D *pTexture = nullptr;
unsigned int subresource;
if (dxgiBuffer)
{
hr = dxgiBuffer->GetResource(__uuidof(ID3D11Texture2D), (LPVOID*)&pTexture);
if (pTexture)
{
dxgiBuffer->GetSubresourceIndex(&subresource);
D3D11_TEXTURE2D_DESC texDesc;
pTexture->GetDesc(&texDesc);
CComQIPtr<ID3D11Device> device;
CComQIPtr<ID3D11DeviceContext> context;
pTexture->GetDevice(&device);
device->GetImmediateContext(&context);
//context->CopySubresourceRegion(m_pSwapChain, 0, 0, 0, 0, pTexture.Get(), subresource, nullptr);
}
}
SafeRelease(&SourceMediaPtr);
}
} while (FALSE);
LeaveCriticalSection(&criticalSection);
hr = sourceReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0, NULL, NULL, NULL, NULL);
return hr;
}
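One thing I am unsure about (this is an assumption, not something I have confirmed): whether the source reader even produces DXGI buffers unless it is created with a DXGI device manager tied to the same device. A sketch of how I believe the reader would have to be created (my names, error handling omitted):
// m_pD3DDevice is the device from D3D11CreateDeviceAndSwapChain. The device presumably
// also needs D3D11_CREATE_DEVICE_VIDEO_SUPPORT and multithread protection enabled.
UINT resetToken = 0;
CComPtr<IMFDXGIDeviceManager> pDXGIManager;
MFCreateDXGIDeviceManager(&resetToken, &pDXGIManager);
pDXGIManager->ResetDevice(m_pD3DDevice, resetToken);

CComPtr<IMFAttributes> pAttributes;
MFCreateAttributes(&pAttributes, 3);
pAttributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, pDXGIManager);
pAttributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
pAttributes->SetUnknown(MF_SOURCE_READER_ASYNC_CALLBACK, this);   // for OnReadSample

CComPtr<IMFSourceReader> pReader;
MFCreateSourceReaderFromMediaSource(pMediaSource, pAttributes, &pReader);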
Regards,
Surabhi
Is there any Audio Stream Routing feature at all in Media Foundation ?
Hi
I have always wanted to know why the audio stream routing feature, which is mentioned everywhere for Media Foundation in MSDN, does not work at all.
It is mentioned in several places in MSDN, but this is the most obvious one:
https://docs.microsoft.com/en-us/windows/win32/coreaudio/stream-routing
Quote:
"In Windows 7, an application can seamlessly transfer a stream from an existing default device to a new default audio endpoint. High-level audio API sets such as Media Foundation, DirectSound, and WAVE APIs implement the stream routing feature. Media applications that use these API sets to play or capture a stream from the default device use the default implementation and will not have to modify the application."
Now the problem is that it does not work at all. If you create an audio device source by calling MFCreateDeviceSource, and you omit the endpoint ID attribute so that the source is created from the default device, there is no audio stream routing. A minimal sketch of that call is below; my two questions follow it.
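(Sketch only, error handling omitted; omitting MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_AUDCAP_ENDPOINT_ID is what selects the default endpoint.)
CComPtr<IMFAttributes> pAttributes;
MFCreateAttributes(&pAttributes, 1);
pAttributes->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                     MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_AUDCAP_GUID);
CComPtr<IMFMediaSource> pSource;
HRESULT hr = MFCreateDeviceSource(pAttributes, &pSource);
// According to the documentation quoted above, a stream created from this source
// should follow the default device when it changes. In my tests it does not.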
The first would be:
Is this just another error in the documentation, like the other 10,000 I have discovered over the past 3-5 years?
The second would be:
Is there a hidden attribute, not listed in Visual Studio or MSDN, that can be set to activate stream routing?
The second question might be the most interesting one, as there was such a hidden attribute for the "Video Processor MFT", which was neither in the MSDN documentation nor in any header available in Visual Studio. That attribute is a rather important one and is only known through a Microsoft code sample. Does the audio device source in Media Foundation also have such a hidden attribute, or is this just an error in the documentation?
As a side note: there seem to be several more issues with audio device sources created from the MFCreateDeviceSource function. One is that the device removed event (MECaptureAudioSessionDeviceRemoved) is not sent when the device is removed/lost. I found dozens of questions about it over at Stack Overflow and also here in the forum, which were never answered. And after testing the audio device source extensively myself, I have to concur that the event is never fired (no matter what settings).
It looks like development of the audio device source implementation was more or less put on hold after Windows 7, and errors crept in. It seems the focus for Microsoft was UWP/WinRT from Windows 8 onwards, and errors in existing code bases were never fixed. For example, I found around 100 hard bugs in the EVR which were never fixed, and it looks like the audio device source shared the same fate.
I hope a Microsoft dev can clear this up.
Regards,
Francis
Rendering Delay when using h264 Decoder
hi there,
I'm dealing with a custom media source that provides live video streams from network cameras. The IMFSample timestamps are set to 0, so all frames are rendered immediately after their arrival. This works fine for MJPEG or MPEG-4 streams.
However, when using H.264, I have a delay of one second (rather more than less). The frames are delivered over RTP, the IMFSamples are created with sample time 0 and delivered by the media stream immediately, but the frames are rendered with a delay.
Are there any buffers within the H.264 decoder MFT? If yes, is it possible to disable them? I know the H.264 decoder MFT is designed for a single, ultra-high-res movie, so this behavior makes sense, but being able to disable it would be quite handy for other scenarios.
So, is it possible to prevent this behavior for H.264, or do I have to take it as it is (and therefore do the decoding somewhere else, in software, or maybe change my graphics card to some sort of NVIDIA or ATI card, as I hear they supply their own decoding MFTs)?
I'm dealing with live surveillance scenarios where this delay is not acceptable.
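For what it's worth, this is a sketch of the kind of setting I was hoping exists. I have not verified that the H.264 decoder honors it; MF_LOW_LATENCY requires Windows 8 or later as far as I know, and CODECAPI_AVLowLatencyMode via ICodecAPI appears to be the equivalent knob:
CComPtr<IMFAttributes> pDecoderAttributes;
HRESULT hr = pH264DecoderMFT->GetAttributes(&pDecoderAttributes);   // my decoder IMFTransform
if (SUCCEEDED(hr) && pDecoderAttributes)
{
    // Ask the decoder to emit frames as soon as possible instead of queueing them.
    hr = pDecoderAttributes->SetUINT32(MF_LOW_LATENCY, TRUE);
}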
thanks in advance, best regards
j.
Video Decoding with custom MFT or MSFT H.264 Decoder MFT
Hi,
I'm trying to decode an H.264 video stream in real time, and I have two questions.
1) There used to be a delay with the MSFT H.264 Decoder MFT of about 1 second. Has this been resolved?
2) When I call IMFTransform::ProcessOutput, it works the first 8 times I call it, and after that it just hangs. What could be causing this?
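For context, a simplified sketch of the synchronous loop I am basing my calls on (placeholder names; CreateOutputSample is a hypothetical helper that allocates a sample of cbSize bytes when the decoder does not provide its own samples):
MFT_OUTPUT_STREAM_INFO info = {};
pDecoder->GetOutputStreamInfo(0, &info);
bool decoderAllocates =
    (info.dwFlags & (MFT_OUTPUT_STREAM_PROVIDES_SAMPLES |
                     MFT_OUTPUT_STREAM_CAN_PROVIDE_SAMPLES)) != 0;

for (;;)
{
    MFT_OUTPUT_DATA_BUFFER out = {};
    out.dwStreamID = 0;
    if (!decoderAllocates)
        out.pSample = CreateOutputSample(info.cbSize);   // hypothetical helper
    DWORD status = 0;
    HRESULT hr = pDecoder->ProcessOutput(0, 1, &out, &status);
    if (hr == MF_E_TRANSFORM_NEED_MORE_INPUT)
        break;                       // feed the next ProcessInput and come back
    if (FAILED(hr))
        break;
    // ... consume out.pSample (the decoded frame) ...
    if (out.pSample) out.pSample->Release();
    if (out.pEvents) out.pEvents->Release();
}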
Thanks!
Abhishek Bhargava
The output audio runs faster when converting the audio sample rate from 16 kHz to 44.1 kHz using Microsoft Media Foundation
I created a SinkWriter that is able to encode video and audio using Microsoft's Media Foundation Platform.
Video is working fine so far but I have some troubles with audio only.
When I convert the audio sample rate from 16 kHz to 44.1 kHz, the output audio runs faster than the input audio.
The bug only appears with the AAC audio codec when the input sample rate is under 32 kHz.
Is there a problem in the WriteSample call?
Given below is the sample test application:
#include <windows.h>
#include <windowsx.h>
#include <comdef.h>
#include <stdio.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <Mferror.h>
#pragma comment(lib, "ole32")
#pragma comment(lib, "mfplat")
#pragma comment(lib, "mfreadwrite")
#pragma comment(lib, "mfuuid")
int main()
{
HRESULT hr = CoInitializeEx(0, COINIT_MULTITHREADED);
hr = MFStartup(MF_VERSION);
IMFMediaType *pMediaType;
IMFMediaType *pMediaTypeOut;
IMFSourceReader *pSourceReader;
IMFAttributes *pAttributes;
IMFSinkWriter *pSinkWriter;
// Load souce file
hr = MFCreateSourceReaderFromURL(
L"fujitsu.mp4",
NULL,
&pSourceReader
);
// Load create SinkWriter of output file
hr = MFCreateSinkWriterFromURL(
L"Output.mp4",
NULL,
NULL,
&pSinkWriter
);
// get media type of input file
hr = pSourceReader->GetCurrentMediaType(
MF_SOURCE_READER_FIRST_AUDIO_STREAM,
&pMediaType);
// set media type for output file
hr = MFCreateMediaType(&pMediaTypeOut);
// set major type for output file
hr = pMediaTypeOut->SetGUID(
MF_MT_MAJOR_TYPE,
MFMediaType_Audio
);
pMediaTypeOut->SetUINT32(MF_MT_AAC_AUDIO_PROFILE_LEVEL_INDICATION, 0x29);
// set audio format for output file
hr = pMediaTypeOut->SetGUID(
MF_MT_SUBTYPE,
MFAudioFormat_AAC
);
// set audio sample rate for output file
hr = pMediaTypeOut->SetUINT32(
MF_MT_AUDIO_SAMPLES_PER_SECOND,
44100
);
// set audio number channal for output file
hr = pMediaTypeOut->SetUINT32(
MF_MT_AUDIO_NUM_CHANNELS,
2
);
// set audio bit depth for output file
hr = pMediaTypeOut->SetUINT32(
MF_MT_AUDIO_BITS_PER_SAMPLE,
16
);
hr = pMediaTypeOut->SetUINT32(
MF_MT_AUDIO_AVG_BYTES_PER_SECOND,
(UINT32)(96000/8 + 0.5)
);
hr = pMediaTypeOut->SetUINT32(
MF_MT_AUDIO_BLOCK_ALIGNMENT,
1
);
DWORD nWriterStreamIndex = -1;
hr = pSinkWriter->AddStream(pMediaTypeOut, &nWriterStreamIndex);
hr = pSinkWriter->BeginWriting();
_com_error err(hr);
LPCTSTR errMsg = err.ErrorMessage();
for (;;)
{
DWORD nStreamIndex, nStreamFlags;
LONGLONG nTime;
IMFSample *pSample;
hr = pSourceReader->ReadSample(
MF_SOURCE_READER_FIRST_AUDIO_STREAM,
0,
&nStreamIndex,
&nStreamFlags,
&nTime,
&pSample);
if (pSample)
{
hr = pSinkWriter->WriteSample(
nWriterStreamIndex,
pSample
);
}
if (nStreamFlags & MF_SOURCE_READERF_ENDOFSTREAM)
{
break;
}
}
hr = pSinkWriter->Finalize();
return 0;
}
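One thing I suspect may be missing in the code above (I have not confirmed that this fixes the speed-up): decoding to PCM on the reader side and handing that exact PCM type to the sink writer as its input type, so the AAC encoder performs the resampling itself. A sketch, placed before BeginWriting():
// Force the source reader to decode the audio stream to PCM.
IMFMediaType *pPcmType = NULL;
hr = MFCreateMediaType(&pPcmType);
hr = pPcmType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
hr = pPcmType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_PCM);
hr = pSourceReader->SetCurrentMediaType(
        MF_SOURCE_READER_FIRST_AUDIO_STREAM, NULL, pPcmType);

// Read back the fully specified PCM type the reader actually produces...
IMFMediaType *pActualPcm = NULL;
hr = pSourceReader->GetCurrentMediaType(
        MF_SOURCE_READER_FIRST_AUDIO_STREAM, &pActualPcm);

// ...and give exactly that type to the sink writer as the input for the AAC stream.
hr = pSinkWriter->SetInputMediaType(nWriterStreamIndex, pActualPcm, NULL);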
Show ID3D11Texture2D on Screen
Hi, I have an RGBA ID3D11Texture2D that I want to display on screen in an application window that I create. I've followed relevant tutorials about creating a swap chain, and render target views, but I can't figure out how to actually make my texture show up on the screen. I tried to get the back buffer as an ID3D11Texture2D from the swap chain and then copy my texture into it using CopyResource, but that didn't work. Any ideas?
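For reference, the copy-to-back-buffer attempt looks roughly like this (simplified sketch with my own names). I understand CopyResource requires the two textures to have identical dimensions, matching formats and the same sample count, which may be where it goes wrong:
ID3D11Texture2D *pBackBuffer = nullptr;
HRESULT hr = pSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
                                   reinterpret_cast<void**>(&pBackBuffer));
if (SUCCEEDED(hr))
{
    pContext->CopyResource(pBackBuffer, pMyRgbaTexture);  // my RGBA ID3D11Texture2D
    pBackBuffer->Release();
    pSwapChain->Present(1, 0);                            // present with vsync
}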
Thanks!
Abhishek Bhargava
H265 Encoder Missing
This page says Windows 10 ships with a software H.265 encoder MFT.
I'm using Windows 10 Pro version 1803 (OS Build 17134.950) and I don't have Mfh265enc.dll anywhere on my system. The MFTEnumEx API doesn't find any software encoders either; it finds only NVIDIA hardware on my system, and NVIDIA + Intel hardware encoders on a dual-GPU laptop my client is using to test my software.
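For reference, the enumeration call I'm using looks roughly like this (sketch, error handling omitted):
MFT_REGISTER_TYPE_INFO outputType = { MFMediaType_Video, MFVideoFormat_HEVC };
IMFActivate **ppActivate = NULL;
UINT32 count = 0;
HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
                       MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_ASYNCMFT |
                       MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
                       NULL,           // any input type
                       &outputType,
                       &ppActivate,
                       &count);
// Only the hardware encoders show up in ppActivate; no software HEVC encoder is listed.
for (UINT32 i = 0; i < count; i++) ppActivate[i]->Release();
CoTaskMemFree(ppActivate);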
Is the H.265 encoder an optional OS component? If so, how do I install it?
Thanks in advance
Reading/writing metadata in a UWP app
I'm trying to read/write metadata for an MP4 video from a UWP app using the Media Foundation APIs.
I have the following code
co_await winrt::resume_background();
check_hresult(MFStartup(MF_VERSION));

com_ptr<IMFByteStream> pByteStream{ nullptr };
check_hresult(MFCreateMFByteStreamOnStreamEx(inStream.as<IUnknown>().get(), pByteStream.put()));

com_ptr<IMFSourceResolver> pSourceResolver{ nullptr };
check_hresult(MFCreateSourceResolver(pSourceResolver.put()));

MF_OBJECT_TYPE objectType;
com_ptr<IUnknown> pUnknownSource{ nullptr };
check_hresult(pSourceResolver->CreateObjectFromByteStream(
    pByteStream.get(), nullptr,
    MF_RESOLUTION_MEDIASOURCE | MF_RESOLUTION_READ | MF_RESOLUTION_WRITE,
    nullptr, &objectType, pUnknownSource.put()));

if (objectType != MF_OBJECT_MEDIASOURCE) { co_return; }

com_ptr<IMFMediaSource> pMediaSource = pUnknownSource.as<IMFMediaSource>();

com_ptr<IMFPresentationDescriptor> pPresentationDescriptor{ nullptr };
check_hresult(pMediaSource->CreatePresentationDescriptor(pPresentationDescriptor.put()));

com_ptr<IMFMetadataProvider> pMetadataProvider{ nullptr };
check_hresult(MFGetService(pMediaSource.as<IUnknown>().get(), MF_METADATA_PROVIDER_SERVICE,
    guid_of<IMFMetadataProvider>(), pMetadataProvider.put_void()));
It fails at the last line with the exception 'The object does not support the specified service'. I've read that you are supposed to use an IPropertyStore instead, like this:
com_ptr<IPropertyStore> pPropertyStore{ nullptr };
check_hresult(MFGetService(pMediaSource.as<IUnknown>().get(), MF_PROPERTY_HANDLER_SERVICE,
    guid_of<IPropertyStore>(), pPropertyStore.put_void()));
However, MF_PROPERTY_HANDLER_SERVICE is in the desktop API partition and cannot be used from UWP apps.
What is the correct way to do this? I can use UWP's VideoProperties class instead, but I wanted a little bit more control. I'm sure the VideoProperties class must use Media Foundation internally.
Media Foundation and Windows Explorer reporting incorrect video resolution, 2560x1440 instead of 1920x1080
https://teleport.blob.core.windows.net/content/should_be_1080p.mp4
Also, vote here: https://aka.ms/AA4y7a2