Channel: Media Foundation Development for Windows Desktop forum

Can IMFMediaEngineEx output stereoscopic 3D video side by side?


I am using the following code to play a stereoscopic 3D video:

::MFStartup(MF_VERSION);

// Create the media engine through its class factory.
CComPtr<IMFMediaEngineClassFactory> spCF;
spCF.CoCreateInstance(CLSID_MFMediaEngineClassFactory);

// Configure playback into a window and hook up the notify callback.
CComPtr<IMFAttributes> spAttr;
::MFCreateAttributes(&spAttr, 2);
spAttr->SetUINT64(MF_MEDIA_ENGINE_PLAYBACK_HWND, (UINT64)hWnd /* or (UINT64)GetSafeHwnd() */);
spAttr->SetUnknown(MF_MEDIA_ENGINE_CALLBACK, &nn); // nn implements IMFMediaEngineNotify

HRESULT hr = spCF->CreateInstance(0, spAttr, &m_spEngine);
m_spEngine->QueryInterface(__uuidof(IMFMediaEngineEx), (void**)&m_spEngineEx);
m_spEngine->SetSource(CComBSTR(L"g:\\mdjsj3-sbs.mp4"));

m_spEngine->Play();

I get only the left (or maybe the right) view, but what I want is the side-by-side view.

Can IMFMediaEngineEx output an MVC stereoscopic 3D video side by side?






License for using Media Foundation H.264 video decoder


Hi,

I am a software developer. We are developing a PC application that decodes H.264 video from legal video source providers. The technology we currently use is Microsoft Media Foundation, specifically the Media Foundation H.264 video decoder:

https://docs.microsoft.com/en-us/windows/desktop/medfound/h-264-video-decoder

I'd like to ask whether the H.264 decoder license fee is covered by Microsoft in this case, and whether the decoder can be used legally.

This is what I found when I checked the license terms of Windows 10:

H.264/AVC and MPEG-4 visual standards and VC-1 video standards. The software may include H.264/MPEG-4 AVC and/or VC-1 decoding technology. MPEG LA, L.L.C. requires this notice:

THIS PRODUCT IS LICENSED UNDER THE AVC, THE VC-1, AND THE MPEG-4 PART 2 VISUAL PATENT PORTFOLIO LICENSES FOR THE PERSONAL AND NON-COMMERCIAL USE OF A CONSUMER TO (i) ENCODE VIDEO IN COMPLIANCE WITH THE ABOVE STANDARDS (“VIDEO STANDARDS”) AND/OR (ii) DECODE AVC, VC-1, AND MPEG-4 PART 2 VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL AND NON-COMMERCIAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE SUCH VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE (AKA.MS/MPEGLA).

My understanding is this: as a software developer using Microsoft Media Foundation, we do not need to pay the license fee for the H.264 decoder; we can legally distribute our application to Windows end users, and end users can legally use it for personal and non-commercial purposes.

Is that correct? Are there any legal concerns with using the Media Foundation H.264 decoder?

Thank you

Where can I find the GUIDs to the MFTs?


Hi,

Do you know if there is a header file (somewhere in the Microsoft folder) where all the CLSID GUIDs are defined for the AMD, NVIDIA and Intel decoders / encoders?

My specific problem: if a system can set up two hardware MFTs (e.g. one from AMD and one from Intel), how can I check that I am activating the correct MFT? I can define all the GUIDs myself, but that is not really clean.

HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
                       unFlags,
                       NULL,        // input type
                       &info,       // output type
                       &ppActivate,
                       &count);
if (SUCCEEDED(hr))
{
    for (UINT32 i = 0; i < count; i++)
    {
        GUID clsid_guid;
        hr = ppActivate[i]->GetGUID(MFT_TRANSFORM_CLSID_Attribute, &clsid_guid);

        if (clsid_guid == GUID_AMD_h264_encoder)
        {
            // ...
        }
        else if (clsid_guid == GUID_INTEL_h264_encoder)
        {
            // ...
        }
    }
}

I would rather trust the GUIDs than a friendly name.


I just use the MFTs already on the system; I don't register any manually with MFTRegisterLocalByCLSID().
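
In case it helps to capture the vendor CLSIDs once per machine, here is a minimal sketch (my own illustration, not an official list; it only uses attributes from mfapi.h / mftransform.h) that prints the CLSID together with the friendly name for each enumerated encoder:

#include <mfapi.h>
#include <mftransform.h>
#include <objbase.h>
#include <stdio.h>

// Sketch: enumerate video encoders and log their identifying attributes.
void DumpVideoEncoderMFTs()
{
    IMFActivate **ppActivate = NULL;
    UINT32 count = 0;
    HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
                           MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
                           NULL, NULL, &ppActivate, &count);
    if (FAILED(hr))
        return;

    for (UINT32 i = 0; i < count; i++)
    {
        GUID clsid = GUID_NULL;
        ppActivate[i]->GetGUID(MFT_TRANSFORM_CLSID_Attribute, &clsid);

        WCHAR szClsid[40] = {};
        StringFromGUID2(clsid, szClsid, ARRAYSIZE(szClsid));

        WCHAR *szName = NULL;
        UINT32 cchName = 0;
        ppActivate[i]->GetAllocatedString(MFT_FRIENDLY_NAME_Attribute, &szName, &cchName);

        wprintf(L"%s  %s\n", szClsid, szName ? szName : L"(no friendly name)");

        CoTaskMemFree(szName);
        ppActivate[i]->Release();
    }
    CoTaskMemFree(ppActivate);
}

That at least lets you record, per system, which CLSID belongs to which vendor, even though it doesn't replace a proper public header.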

Unable to set SetOutputType() on H265 MFT decoder.


Hi,

I am trying to use the MFT H265 decoder in my code. I obtained an instance of the decoder via MFTEnumEx(), as CoCreateInstance() was not working for me. Then I set the input media type via SetDecoderInputMediaType(). I am facing an issue when setting the output media type.

How am I setting the output media type? Something like this:

mVideoDecoder->GetOutputAvailableType(0, i, &outMediaType);
mVideoDecoder->SetOutputType(0, outMediaType, 0);

It returns 0xC00D36E6 (MF_E_ATTRIBUTENOTFOUND).
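
For reference, here is a minimal sketch of the negotiation loop I am describing (variable names as above; error handling trimmed):

// Walk the decoder's output type list until one is accepted.
HRESULT hr = S_OK;
for (DWORD i = 0; ; i++)
{
    CComPtr<IMFMediaType> outMediaType;
    hr = mVideoDecoder->GetOutputAvailableType(0, i, &outMediaType);
    if (FAILED(hr))
        break; // MF_E_NO_MORE_TYPES once the list is exhausted

    hr = mVideoDecoder->SetOutputType(0, outMediaType, 0);
    if (SUCCEEDED(hr))
        break; // the decoder accepted this output type
}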

I took the MFTrace logs as well.

Here is the snippet from the logs:

////

1692,3720 11:44:25.76478 CMFTransformDetours::SetOutputType @000000007EE56018 Failed MT: MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_DEFAULT_STRIDE=0;MF_MT_FIXED_SIZE_SAMPLES=1;MF_MT_VIDEO_NOMINAL_RANGE=2;MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_ALL_SAMPLES_INDEPENDENT=1;MF_MT_ORIGINAL_4CC=1129727304;MF_MT_SAMPLE_SIZE=0;MF_MT_INTERLACE_MODE=7;MF_MT_SUBTYPE=MFVideoFormat_NV12

////

 

I tried changing the output media type attributes as well:

//

5388,1404 12:19:15.58190 CMFTransformDetours::SetOutputType @000000007AA31998 Failed MT: MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_DEFAULT_STRIDE=0;MF_MT_FIXED_SIZE_SAMPLES=0;MF_MT_VIDEO_NOMINAL_RANGE=2;MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_ALL_SAMPLES_INDEPENDENT=0;MF_MT_ORIGINAL_4CC=1129727304;MF_MT_SAMPLE_SIZE=0;MF_MT_INTERLACE_MODE=7;MF_MT_SUBTYPE=MFVideoFormat_NV12

//

Please help me in this regard. I have spent a lot of time on this, but in vain.

PS: I installed the free version of the HEVC Video Extensions from the Microsoft Store.

How do I read a 3D left-right movie's full frame?


I'm using Media Foundation to read movies. When reading a 3D left-right movie (MP4), I want to get the full frame, as displayed, for example, in Windows Media Player.

As an alternative, how do I read the two halves? Currently my normal path simply reads the left side of such movies (I get a frame size that is half the movie's full frame size, that is, 1920x2160).

How to force quality-based mode for software H264 encoder using IMFSinkWriterEx on Windows 8+?


I'm trying to make a simple screen recorder that uses the Desktop Duplication API for desktop image capture and the Media Foundation IMFSinkWriterEx for MP4 encoding. The H.264 Video Encoder documentation says that on Windows 8 the CODECAPI_AVEncCommonQuality property can be set at any time during encoding. In my case, the following code works only with hardware transforms.

// Creating sink writer

// Setting output media type

// Setting input media type

if (SUCCEEDED(hr))
{
    hr = m_pSinkWriter->GetServiceForStream(m_dwVideoStream, GUID_NULL,
                                            __uuidof(ICodecAPI),
                                            reinterpret_cast<void **>(&pCodecApi));
}
if (FAILED(hr))
{
    Logger::LogError(hr, "Failed to QI for ICodecAPI");
    goto done;
}

VARIANT var;
if (SUCCEEDED(hr))
{
    VariantInit(&var);
    var.vt = VT_UI4;
    var.ulVal = eAVEncCommonRateControlMode_Quality;
    hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonRateControlMode, &var);
    VariantClear(&var);
}
if (FAILED(hr))
{
    Logger::LogError(hr, "Failed to set quality-based VBR mode");
    goto done;
}

if (SUCCEEDED(hr))
{
    VariantInit(&var);
    var.vt = VT_UI4;
    var.ulVal = quality;
    hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonQuality, &var);
    VariantClear(&var);
}
if (FAILED(hr))
{
    Logger::LogError(hr, "Failed to set quality value for video stream");
    goto done;
}

if (SUCCEEDED(hr))
{
    VariantInit(&var);
    var.vt = VT_UI4;
    var.ulVal = 50;
    hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonQualityVsSpeed, &var);
    VariantClear(&var);
}
if (FAILED(hr))
{
    Logger::LogError(hr, "Failed to set quality vs speed value for video stream");
    goto done;
}

// All methods above always return S_OK with or without hardware transforms

I have NVENC and Intel Quick Sync video encoders on my PC. For some reason NVENC throws a catastrophic failure on activation, so the Sink Writer uses the Intel Quick Sync encoder, and there the quality level works fine. But with the software encoder it doesn't work: I tried setting the quality level to 1 and the produced video quality still didn't change. I also tried creating and setting an IPropertyStore as described in "Quality-Based Variable Bit Rate Encoding" and adding it to the SinkWriter creation attributes; that didn't work even with the hardware encoder. Can someone explain how to set quality for the software encoder?
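
For completeness, this is roughly the property-store variant I tried, following the "Quality-Based Variable Bit Rate Encoding" article (a sketch; whether the software H.264 MFT honors MFPKEY_VBRENABLED / MFPKEY_DESIRED_VBRQUALITY from wmcodecdsp.h is exactly what I'm unsure about):

#include <propsys.h>      // PSCreateMemoryPropertyStore
#include <wmcodecdsp.h>   // MFPKEY_VBRENABLED, MFPKEY_DESIRED_VBRQUALITY

CComPtr<IPropertyStore> pProps;
HRESULT hr = PSCreateMemoryPropertyStore(IID_PPV_ARGS(&pProps));

PROPVARIANT pv;
PropVariantInit(&pv);
pv.vt = VT_BOOL;
pv.boolVal = VARIANT_TRUE;
hr = pProps->SetValue(MFPKEY_VBRENABLED, pv);           // enable quality-based VBR

PropVariantInit(&pv);
pv.vt = VT_I4;
pv.lVal = 70;                                           // target quality, 0-100
hr = pProps->SetValue(MFPKEY_DESIRED_VBRQUALITY, pv);

// Hand the store to the sink writer at creation time.
CComPtr<IMFAttributes> pAttr;
hr = MFCreateAttributes(&pAttr, 1);
hr = pAttr->SetUnknown(MF_SINK_WRITER_ENCODER_CONFIG, pProps);
// ... then pass pAttr to MFCreateSinkWriterFromURL(...)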

Sorry, I cannot post links here until my account is verified.


How do I read a 3D left-right movie's full frame using a source reader?


I'm using an IMFSourceReader to read a movie. When I open a 3D left-right MP4 movie, I can't seem to get all the data; I'm only getting half the frame.

I'm aware of MF_ENABLE_3DVIDEO_OUTPUT, and tried to set it on the media type, but that didn't change anything. I'm not sure if that's the right attribute or where to set it.

The source reader reports a frame size of 1920x2160 for a 4K movie, i.e. half the full width, and when I call GetBufferCount on the sample, the result is 1. So I have no idea how to get all the data of the frame.

I looked at the DX11VideoRenderer sample, and that seems to assume that GetBufferCount returns 2. However, it doesn't use IMFSourceReader, so I'm not sure how to apply what it does to my scenario.

Optimally, what I want is to use MF3DVideoOutputType_BaseView and get the full 4K source image.


No Media playback sound after update to latest insider build

After I updated my laptop to the latest Windows Insider build, no audio comes out of the speakers or earphones. There is a continuous buzzing sound in the earphones, but the speakers don't respond at all.

Why does IMFTrackedSample::SetAllocator always return MF_E_NOTACCEPTING?


hi all,

When I write a video decoder with DXVA2 and Media Foundation, referring to: http://msdn.microsoft.com/en-us/library/windows/desktop/aa965266(v=vs.85).aspx

Decoding:

I set the allocator successfully the first time, but after that SetAllocator() always fails with MF_E_NOTACCEPTING and the error message "the callee is currently not accepting further input." Within my callback, I only get the IMFAsyncResult status.

Can anyone tell me why this happens, and how should I write the callback that the renderer invokes?

thanks

Jackic 


one work one gain!


Freelance WMF Developer Needed


Hi guys, 

We are looking for a WMF expert to help develop a project to encode and decode raw frames in H264/H265 using WMF. You can find the requirement spec here: https://docs.google.com/document/d/16cUDXwJ9vQLAh3JsUbIHpFAHBklWr0oa0-IVhIX_yUM/edit#

If you are interested, please contact me at max (at) kazendi.com

Best,

Max

Reading HEIF/HEIC files


Hi team,

I was wondering if I can use Media Foundation to read HEIC/HEIF files. I tried using IMFSourceReader, but I get the error:

0xc00d36e5 The operation on the current offset is not permitted

when I attempt to create a source reader using MFCreateSourceReaderFromURL.

I have the HEIF and HEVC extensions installed on my system, and I can open HEIC files with the Windows photo viewer. I was also able to read HEVC-encoded MKV and MP4 files using IMFSourceReader on the same system.

Does it matter which Microsoft Account I used to install these extensions on my system?

Is this the right interface to use? Or do I have to use a third-party library such as libheif or Nokia's HEIF Reader/Writer to parse the HEIF structure and then use an IMFByteStream to supply the raw HEVC stream to a source reader?

Any help would be appreciated.

Regards,

Dinesh

Failed to build bitstream DXVA2.0

Hi everyone,

I have a case where the IDirectXVideoDecoder::GetBuffer function returns a bitstream data buffer (DXVA2_BitStreamDateBufferType) that is too small for my slice size.
Can you give more details on how the buffer size is computed inside this function?

The frame is correctly decoded when not using DXVA.

I can send a link to the video if that can help.

Best regards,
Arnaud

DX11 Video Renderer sample code is incomplete.

Your ref : http://code.msdn.microsoft.com/windowsdesktop/DirectX-11-Video-Renderer-0e749100

Dear Sirs,

So ... that project creates a DLL, which isn't a standard EVR, so I have no clue how to fire it up.

Feedback states that the only way to test it is to use a closed-source version of TopoEdit from the Windows 8 SDK.

That isn't very helpful, as it doesn't demonstrate how to use the thing.

Please provide sample code, the simpler the better, that demonstrates how to use this DirectX 11 video renderer (or a derivative) to throw a video texture onto some simple geometry on the screen.

As a follow-up, please demonstrate multiple video textures playing simultaneously, to show that the API supports this feature. If it doesn't, please add this support :)

Sorry to give you a hard time, but I need a solid video API, and if Windows doesn't provide one, it's time to look to other operating systems for a robust solution.

Regards,
Steve.

Colour Conversion IMFTransform throws exception in COLORCNV.DLL


Hi folks,

I'm trying to use the IMFTransform interface to convert a YUY2 video frame buffer (a single frame) to RGB24. I have checked the IMFSample* I am getting, and it has the correct type and the correct 1280*960*2 bytes of data. Here is the code I am using to convert to RGB24:

	// ----------------------------------------------------------------------------
	// Convert data to RGB colorspace
	// ----------------------------------------------------------------------------
	if (bNeedsConvert)
	{
		// Create instance of IMFTransform interface pointer as CColorConvertDMO
		hr = CoCreateInstance(CLSID_CColorConvertDMO, NULL, CLSCTX_INPROC_SERVER, IID_IMFTransform, (LPVOID*)&pTransform);
		if (FAILED(hr))
		{
			fprintf(stderr, "CoCreateInstance for IMFTransform failed, code 0x%x\n", hr);
			return hr;
		}

		// Set input type as media type of our input stream
		hr = pTransform->SetInputType(0, pType, 0);
		if (FAILED(hr))
		{
			fprintf(stderr, "pTransform->SetInputType failed, code 0x%x\n", hr);
			return hr;
		}

		// Create new media type
		hr = MFCreateMediaType(&pOutputType);

		// Set colorspace in output type to RGB24, uncompressed, not interlaced
		hr = pOutputType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
		hr = pOutputType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB24);
		hr = pOutputType->SetUINT32(MF_MT_FIXED_SIZE_SAMPLES, 1);
		hr = pOutputType->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, 1);
		hr = pOutputType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);

		// Copy data from input type to output type
		// Frame Size
		UINT32 uX, uY;
		hr = GetFrameSize(pType, &uX, &uY);
		hr = SetFrameSize(pOutputType, uX, uY);
		// Frame rate
		hr = GetFrameRate(pType, &uX, &uY);
		hr = SetFrameRate(pOutputType, uX, uY);
		// Pixel aspect ratio
		hr = GetPixelAspectRatio(pType, &uX, &uY);
		hr = SetPixelAspectRatio(pOutputType, uX, uY);

		// Set transform output type
		hr = pTransform->SetOutputType(0, pOutputType, 0);
		if (FAILED(hr))
		{
			fprintf(stderr, "pTransform->SetOutputType failed, code 0x%x\n", hr);
			return hr;
		}

		// Notify the transform we are about to begin streaming data
		hr = pTransform->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, 0);
		if (FAILED(hr))
		{
			fprintf(stderr, "pTransform->ProcessMessage failed, code 0x%x\n", hr);
			return hr;
		}

		// Send our input sample to the transform
		hr = pTransform->ProcessInput(0, pSample, 0);
		if (FAILED(hr))
		{
			fprintf(stderr, "pTransform->ProcessInput failed, code 0x%x\n", hr);
			return hr;
		}

		MFT_OUTPUT_DATA_BUFFER modbBuffer;
		DWORD dwStatus = 0;
		hr = pTransform->ProcessOutput(0, 1, &modbBuffer, &dwStatus);
		if (FAILED(hr))
		{
			fprintf(stderr, "pTransform->ProcessOutput failed, code 0x%x\n", hr);
			return hr;
		}
	}

All HRESULTs are OK. At the pTransform->ProcessOutput line, I get:

Exception thrown at 0x00007FFFCD03A1CC (COLORCNV.DLL) in cam.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF.

I've checked all the pointers I pass to the transform, and they are good. A bit stumped here. TIA for any input.
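
One thing I have not ruled out myself: modbBuffer is declared but never initialized, and unless the transform reports MFT_OUTPUT_STREAM_PROVIDES_SAMPLES in its output stream info, the caller has to supply the output sample in pSample; an uninitialized pSample would explain an access violation at a garbage address. A sketch of providing the sample myself (buffer size taken from GetOutputStreamInfo):

// Ask the MFT whether it allocates output samples; if not, supply one.
MFT_OUTPUT_STREAM_INFO osi = {};
hr = pTransform->GetOutputStreamInfo(0, &osi);

IMFSample *pOutSample = NULL;
IMFMediaBuffer *pOutBuffer = NULL;
if (SUCCEEDED(hr) && !(osi.dwFlags & MFT_OUTPUT_STREAM_PROVIDES_SAMPLES))
{
    hr = MFCreateSample(&pOutSample);
    if (SUCCEEDED(hr))
        hr = MFCreateMemoryBuffer(osi.cbSize, &pOutBuffer);
    if (SUCCEEDED(hr))
        hr = pOutSample->AddBuffer(pOutBuffer);
}

MFT_OUTPUT_DATA_BUFFER modbBuffer = {};   // zero-init pEvents, dwStatus, etc.
modbBuffer.dwStreamID = 0;
modbBuffer.pSample = pOutSample;          // NULL only if the MFT provides samples
DWORD dwStatus = 0;
hr = pTransform->ProcessOutput(0, 1, &modbBuffer, &dwStatus);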


Windows Audio Endpoint Builder won't start on Win 7

I did everything I could think of to make my audio work. I get errors 126, 1067, etc. What can I do to get this fixed?

External camera triggers with Media Foundation


Hi,

I need to be able to use external physical buttons on a UVC webcam with Media Foundation (ideally). When pushed, the external button needs to be detected so the application I'm working on can save the current camera frame buffer image. I'm using C++ with Win32 APIs.

DirectShow has the ability to access an external button via IAMVideoControl::SetMode using VideoControlFlags with VideoControlFlags_ExternalTriggerEnable, but I can't seem to find equivalent functionality in Media Foundation.

Is my only option to rewrite everything in DirectShow (which is deprecated, I think), or is there a way to get this working with Media Foundation? Also, I need support for Windows 7 and up.

Thanks!



IMFSinkWriter: write AAC stream


I'm trying to write an MP4 with one H.264 video stream and one AAC audio stream. The video stream is written to the file as expected. The audio stream is too short, and it seems to me that the sound is slightly pitch-shifted...

Using MediaInfo to inspect the video file, two things catch the eye:

First, the sample rate of the audio stream:

- Sampling rate: 48.0 kHz

- Frame rate: 46.875 fps (1024 SPF) // <- I've seen this in other video files as well, so maybe it's not a problem

Second, the duration of the audio stream:

- Duration: 2s 752ms

- mdhd_Duration 2752

This seems definitely wrong: both the video stream and the container have a duration of 3s 42ms...

Maybe there is something wrong with my audio timestamp calculation? The input audio is PCM (48 kHz, 16-bit, signed, stereo). I want to write 1024-frame packets to the sink writer as long as there are enough bytes to write.

static constexpr int64 k100NanoSec = 10000000;
int64 sampleTime = k100NanoSec * currentSampleNum / sampleRate;
int64 duration = k100NanoSec * numFrames / sampleRate;

sample->SetSampleTime (sampleTime);
sample->SetSampleDuration (duration);
currentSampleNum += numFrames;

numFrames is usually 1024 (one AAC packet) or less. currentSampleNum starts at 0 and is increased by numFrames. The sampleRate is 48000 Hz. The duration is always 213333, and the sample times are 0, 213333, 426666, ...
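
Sanity-checking the numbers (my own arithmetic): one 1024-frame packet at 48 kHz lasts 10,000,000 * 1024 / 48000 = 213,333 hns units, which matches the durations above. The reported audio duration of 2752 ms corresponds to 2.752 * 48000 = 132,096 frames, i.e. exactly 129 packets of 1024, while 3.042 s of audio would need roughly 142.6 packets. So it looks as if whole trailing packets never make it into the file, rather than the timestamps being off.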

Do you have any further clues where the problem could lie?






IMFSinkWriter: missing bytes in AAC stream


I'm trying to write an MP4 with one H.264 video stream and one AAC audio stream. The video stream is written to the file as expected. The audio stream is too short...

Using MediaInfo to inspect the file, I see that the duration of the audio stream is shorter than the duration of the video stream (3s 42ms). That seems definitely wrong. Moreover, the audio stream shows two duration properties in the MediaInfo dialog:

- Duration: 2s 752ms

- mdhd_Duration 2752

Maybe there is something wrong with my audio timestamp calculation? The input audio is PCM (48 kHz, 16-bit, signed, stereo). I want to write 1024-frame packets to the sink writer as long as there are enough bytes to write.

static constexpr int64 k100NanoSec = 10000000;
int64 sampleTime = k100NanoSec * currentSampleNum / sampleRate;
int64 duration = k100NanoSec * numFrames / sampleRate;

sample->SetSampleTime (sampleTime);
sample->SetSampleDuration (duration);
currentSampleNum += numFrames;

numFrames is usually 1024 (one AAC packet) or less. currentSampleNum starts at 0 and is increased by numFrames. The sampleRate is 48000 Hz (the constant duration of 213333 matches 1024 frames at 48 kHz). The duration is always 213333, and the sample times are 0, 213333, 426666, ...

Do you have any further clues where the problem could lie?

I know that the last PCM packet passed to the IMFSinkWriter has fewer than 1024 frames. Maybe the AAC converter inside the sink writer is not able to produce output for this last partial packet. Using MFTrace, I can see that the ProcessOutput call always claims to have too little data. Is my assumption correct that all streams get drained when I call Finalize()?









Play several videos with MediaEngine, what needs to be done with DXGIDeviceManager?


I want to play several videos at once and show them together in the same output (so I can arrange them as a grid, overlap them, etc.).

I was hoping to use Direct2D.

I was looking at MediaEngine, as it seems quite easy, but I am unsure what to do with the DXGI Device Manager: do I create one for all instances of MediaEngine, or one for each?

What's the best approach?
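
For context, this is the setup I had in mind, with a single D3D11 device and a single DXGI Device Manager shared by all engine instances (a sketch of my assumption, not a confirmed recommendation):

// One multithread-protected D3D11 device for all engines.
CComPtr<ID3D11Device> spDevice;
D3D_FEATURE_LEVEL fl;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                  D3D11_CREATE_DEVICE_VIDEO_SUPPORT | D3D11_CREATE_DEVICE_BGRA_SUPPORT,
                  nullptr, 0, D3D11_SDK_VERSION, &spDevice, &fl, nullptr);

CComPtr<ID3D10Multithread> spMT;
spDevice.QueryInterface(&spMT);
spMT->SetMultithreadProtected(TRUE);

// One DXGI device manager, reset once with that device...
UINT resetToken = 0;
CComPtr<IMFDXGIDeviceManager> spManager;
MFCreateDXGIDeviceManager(&resetToken, &spManager);
spManager->ResetDevice(spDevice, resetToken);

// ...and handed to every MediaEngine instance through its creation attributes.
spAttr->SetUnknown(MF_MEDIA_ENGINE_DXGI_MANAGER, spManager);

Is that the right model, or does each engine need its own manager/device?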


How to extract R8_UNORM and R8G8_UNORM from an NV12 texture, preferably with no copy?


I have a Direct3D 11 Texture2D / DXGI surface with a pixel format of NV12.

Is there a way to get the Y (luma) R8_UNORM portion of it as another texture, preferably shared without copying?

And the same for the Chroma part?

I'd be happy if I could create two Direct2D bitmap planes from the texture.

I don't want to change the pixel format to RGB.

I want the textures to stay in GPU memory without copying them over the bus.

If that's not possible, can it be done with a copy?

If I lock the bits and create a D2D bitmap from the pointer in a raw fashion, will that copy over the bus?
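
For what it's worth, the direction I was considering on the D3D11 side: create two shader resource views over the same NV12 texture, R8_UNORM aliasing the luma plane and R8G8_UNORM aliasing the interleaved chroma plane, which involves no copy (a sketch, assuming the texture was created with D3D11_BIND_SHADER_RESOURCE; spDevice and spNV12Texture are my placeholders):

D3D11_SHADER_RESOURCE_VIEW_DESC desc = {};
desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
desc.Texture2D.MostDetailedMip = 0;
desc.Texture2D.MipLevels = 1;

CComPtr<ID3D11ShaderResourceView> spLumaView, spChromaView;

desc.Format = DXGI_FORMAT_R8_UNORM;       // Y plane, full resolution
HRESULT hr = spDevice->CreateShaderResourceView(spNV12Texture, &desc, &spLumaView);

desc.Format = DXGI_FORMAT_R8G8_UNORM;     // UV plane, half resolution
hr = spDevice->CreateShaderResourceView(spNV12Texture, &desc, &spChromaView);

If the decoder created the texture without the shader-resource bind flag, a GPU-side CopyResource into a texture that has it would still avoid going over the bus. Whether Direct2D bitmaps can wrap those views directly is part of my question.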


